"I want Llama3 to perform 10x with my private knowledge" - Local Agentic RAG w/ llama3

  • Published: 22 May 2024
  • Advanced RAG 101 - build agentic RAG with llama3
    Get free HubSpot report of how AI is redefining startup GTM strategy: clickhubspot.com/4hx
    🔗 Links
    - Follow me on twitter: / jasonzhou1993
    - Join my AI email list: www.ai-jason.com/
    - My discord: / discord
    - Corrective RAG agent: github.com/langchain-ai/langg...
    - LlamaParse: github.com/run-llama/llama_parse
    - Firecrawl: www.firecrawl.dev/
    - Jerry Liu build production-ready RAG: • Building Production-Re...
    ⏱️ Timestamps
    0:00 Intro
    1:33 How to give LLM knowledge
    3:05 Problem with simple RAG
    5:55 Better Parser
    9:01 Chunk size
    11:40 Rerank
    12:39 Hybrid search
    13:10 Agentic RAG - Query translation
    14:35 Agentic RAG - metadata filtering
    15:52 Agentic RAG - Corrective RAG agent
    17:33 Install LLama3
    18:00 Code walkthrough
    👋🏻 About Me
    My name is Jason Zhou, a product designer who shares interesting AI experiments & products. Email me if you need help building AI apps! ask@ai-jason.com
    #llama3 #rag #llamaparse #llamaindex #gpt5 #autogen #gpt4 #autogpt #ai #artificialintelligence #tutorial #stepbystep #openai #llm #chatgpt #largelanguagemodels #largelanguagemodel #bestaiagent #chatgpt #agentgpt #agent #babyagi
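The chunk-size tuning covered at 9:01 comes down to splitting documents into overlapping windows and experimenting with the window size. A minimal sketch in Python (function name and defaults are illustrative, not from the video):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows.

    Small chunks retrieve precisely but lose surrounding context;
    large chunks keep context but dilute relevance -- hence the
    experimentation the video recommends.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```

In practice you would run retrieval evaluation over several sizes (e.g. 128, 256, 512 tokens) and keep the one with the best answer quality.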
  • Science

Comments • 166

  • @Jim-ey3ry
    @Jim-ey3ry 22 days ago +94

    This is prob one of the best RAG videos I've seen, so many learnings in 20 mins

  • @shyamvai
    @shyamvai 18 days ago +3

    One of the most informative RAG videos I’ve seen. Can’t wait to see more from your channel.

  • @kenchang3456
    @kenchang3456 22 days ago +49

    Man, your videos keep getting better every time I look. You have a great mind and your presentation is excellent. Thank you very much, again, for sharing!

    • @magicismagic123
      @magicismagic123 14 days ago +1

      he is much better than 99.9% of the wannabe overhyped AI gurus on YouTube, Twitter and LinkedIn!

  • @titusblair
    @titusblair 22 days ago +3

    Yet again an amazing tutorial, thanks so much Jason!

  • @FightFlixTv
    @FightFlixTv 21 days ago +2

    This is the best RAG video on the internet, awesome job, no fluff, high complexity but easy to understand, nice work

  • @CynicalWilson
    @CynicalWilson 22 days ago +20

    Holy crap! This gave me such amazing background knowledge, love it! Now, what would be extra cool would be a real "hands-on" workshop that goes through it all by setting up the environment completely, including the actual training/RAG implementation for a set of various document types (PDF, Excel, website, etc.) to extend a locally running Llama 3 instance 😊

  • @starmap
    @starmap 11 days ago +1

    Great content! Thanks for putting in the effort. Will use this.

  • @jaanireel
    @jaanireel 19 days ago +16

    00:05 AI can revolutionize Knowledge Management
    01:46 Llama3 can process precise knowledge with fast inference
    05:27 Market strategy for AI startups
    07:16 Convert PDF files to markdown format for enhanced accuracy and control
    10:47 Finding the optimal chunk size through experiments
    12:34 Hybrid search combines Vector search and keyword search for better results
    16:12 Building a local agentic RAG with llama3
    17:48 Running Llama3 model on local machine and using Visual Studio Code
    20:53 Setting up key components for Llama3 performance
    22:20 Creating a complex agentic RAG workflow for document retrieval and answering
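The hybrid search noted at 12:34 combines vector search and keyword search; a common way to merge the two result lists is reciprocal rank fusion. A minimal, dependency-free sketch (the helper names and the fusion constant `k=60` are my own assumptions, not from the video):

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document (toy stand-in for BM25)."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_search(query, query_vec, docs, doc_vecs, k=60):
    """Rank docs by vector similarity and keyword overlap separately,
    then merge with reciprocal rank fusion: score = sum(1 / (k + rank))."""
    vec_rank = sorted(range(len(docs)), key=lambda i: -cosine(query_vec, doc_vecs[i]))
    kw_rank = sorted(range(len(docs)), key=lambda i: -keyword_score(query, docs[i]))
    fused = {i: 0.0 for i in range(len(docs))}
    for ranking in (vec_rank, kw_rank):
        for rank, i in enumerate(ranking):
            fused[i] += 1.0 / (k + rank + 1)
    return sorted(fused, key=fused.get, reverse=True)
```

A real pipeline would swap `keyword_score` for BM25 and `cosine` for an embedding-store query; the fusion step stays the same.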

  • @scottmiller2591
    @scottmiller2591 22 days ago +35

    1) The link for the corrective RAG agent had an extra URL attached at the end which caused it to fail; manually tracing the link got me to the proper location
    2) LlamaParse looks like a wonderful tool, since I have a lot of documents with equations, and I really need it to grab equations, if for no other reason than to return them. Unfortunately, LlamaParse requires an API key and seems to send PDFs off for processing, something that others have noted and there is an open issue from 2 weeks ago. As of 3 hours ago, it's still an open issue - clearly most companies don't want to send internal docs out of house. Hopefully this gets resolved soon.
    3) Really liked your presentation - easy to follow every step with the provided materials.

    • @dennou2012
      @dennou2012 20 days ago +1

      Hopefully we will have better options for local use - shame it's not a local-only pipeline yet

    • @yunxinglu4020
      @yunxinglu4020 18 days ago +1

      yes - I have found this issue too. LlamaParse seems to use an OpenAI LLM to process the PDF, which leads to privacy concerns.

  • @fredygerman_
    @fredygerman_ 19 days ago +4

    You always amaze me with the amount of knowledge I get from your videos

  • @PIOT23
    @PIOT23 21 days ago +1

    What a great video! Thanks for sharing your knowledge

  • @MyAmazingUsername
    @MyAmazingUsername 19 days ago +1

    Really great tutorial, teaches a lot in very short time! Thanks!

  • @bruinx1679
    @bruinx1679 1 day ago

    Excellent video! I don't have much experience with RAG and this was sooo helpful!

  • @tkp2843
    @tkp2843 22 days ago +38

    Firecrawl boosted our RAG accuracy at our company. Fast + provided good markdown format.
    LlamaParse was also super helpful! Amazing video Jason! This is gold!
    Edit: thanks for the likes :)

  • @seventhapex
    @seventhapex 21 days ago +1

    dude... great video! Thanks for the knowledge!

  • @contractorwolf
    @contractorwolf 4 days ago

    Jason, I watch a lot of AI videos but I learn the most from yours. I am actually excited every time I see you have put another one out. Keep up the great work!

  • @dataanalysiscourse785
    @dataanalysiscourse785 22 days ago +3

    Awesome content!

  • @jasonfinance
    @jasonfinance 22 days ago +4

    Didn't know about the agentic RAG techniques, thanks for sharing!! There's definitely a trade-off between speed & quality, but it's good to have the option

  • @beelzebub2808
    @beelzebub2808 6 days ago +1

    This is extremely helpful! Awesome!

  • @free_thinker4958
    @free_thinker4958 22 days ago +3

    You're the man 💯👏

  • @user-rj1eu6kp3u
    @user-rj1eu6kp3u 22 days ago

    right when i needed it, thank you man!
    also, just finished watching and i understood the theory behind it but kinda got lost during the code explanation, i might watch it again and again

  • @Max-hj6nq
    @Max-hj6nq 22 days ago +1

    Solid video Jason

  • @priyankajain1691
    @priyankajain1691 19 days ago

    Amazing tutorial! Thank you

  • @MrSuntask
    @MrSuntask 22 days ago

    Great tutorial! Thank you

  • @jorper98
    @jorper98 19 days ago

    Amazing info shared. Thank you!

  • @renderwood
    @renderwood 10 days ago

    Keep this up. This answered loads of questions I had that were not answered in any of the HuggingFace tutorials!

  • @gaijinshacho
    @gaijinshacho 22 days ago +10

    Great timing! Why do you always read my mind JASON!!?! lol

  • @Hash_Boy
    @Hash_Boy 22 days ago +1

    many many thanks, bro!

  • @Entropy67
    @Entropy67 16 days ago +1

    Subscribed. I don't have an AI company since I'm still a poor student... this video was very informative; the man speaks at two times speed just like my professor. I respect it 😁

  • @jackmermigas9465
    @jackmermigas9465 5 days ago

    wow nice work thanks!

  • @puzitrajSinghKR
    @puzitrajSinghKR 15 days ago +2

    Thanks!

  • @liamlarsen9286
    @liamlarsen9286 22 days ago

    awesome jason thank you

  • @AdahAugustine-fy6xx
    @AdahAugustine-fy6xx 9 days ago

    Thanks... Awesome video

  • @tunesafari8952
    @tunesafari8952 20 days ago

    Great video, thanks

  • @rab0309
    @rab0309 22 days ago +11

    great video, keep making these please.. only "criticism" / advice, if you can call it that, is to keep things focused on local / open source solutions as much as possible.. love the use of Ollama here for example.. things that don't require API keys, subscriptions, or external integrations / dependencies help people like me understand more of what's going on in a workflow like this! thanks again!

  • @Psychopatz
    @Psychopatz 2 days ago

    This is a great trick, thanks

  • @jaydencollier9339
    @jaydencollier9339 13 days ago

    I am literally using this technique now in my internship for a project. I went through so many approaches and ended up on my version of this one. Wish you released this video about 2 months ago lol

  • @szpiegzkrainydeszczowcow8476
    @szpiegzkrainydeszczowcow8476 22 days ago

    You are relevant, subscribing to your channel!

  • @azathought_games
    @azathought_games 20 days ago +1

    Such a bait and switch. Thumbnail promises fine tuning tutorial. Delivers best improve-your-RAG video on the internet. Excellent work.

  • @kartiknighania8588
    @kartiknighania8588 21 days ago +1

    OG Jin Yang from Silicon Valley.. Amazing video 🎉

  • @LibertyRecordsFree
    @LibertyRecordsFree 20 days ago +1

    Amazing lesson! I learned a lot in just 20 min!

  • @MrStevemur
    @MrStevemur 14 days ago

    Thanks! It's so fascinating how these programs 'think.' Even if I don't install one, concepts like chunking seem to translate to humans as well.

  • @asetkn
    @asetkn 22 days ago

    Platform agnostic LLM space overview videos from Jason are the best on AI YT

  • @MyWatermelonz
    @MyWatermelonz 22 days ago +5

    I prefer fine-tuning first, then RAG on top of the fine-tuned model. Just a simple QLoRA is all you need. It really helps a ton.

    • @helix8847
      @helix8847 21 days ago

      How would you go about doing that, as in just do it backwards from the video?

  • @arianetrek7049
    @arianetrek7049 7 days ago

    The corrective RAG schema explains why AI often tries to bring in results from the web even when you tell it not to in the prompt. If it doesn't understand the source properly it will look elsewhere. This was insightful, thank you.

  • @abdallahelra3y118
    @abdallahelra3y118 19 days ago

    This is epic! Keep it up...

  • @mathavansg9227
    @mathavansg9227 21 days ago

    Best video💯

  • @ConsultingjoeOnline
    @ConsultingjoeOnline 20 days ago

    Clicked that BELL too! 🔔

  • @98hghghg98
    @98hghghg98 19 days ago

    great video jason! quick question: I'm wondering if a knowledge graph in place of a vector database would be better, since it mitigates the lost-in-the-middle problem?

  • @mrkubajski9528
    @mrkubajski9528 16 days ago

    I have to say, it is great :D

  • @Joe-bp5mo
    @Joe-bp5mo 22 days ago

    This answers a lot of questions about why my chat-with-PDF doesn't work; LlamaParse & Firecrawl look so freaking good!

  • @jonm6834
    @jonm6834 16 days ago

    You got a sub. Finally, an AI channel that actually teaches.

  • @freddy29228
    @freddy29228 15 days ago

    Thanks Jason, great video, this explains RAG pretty well. Subscribed!

  • @EverythinTechnology
    @EverythinTechnology 22 days ago

    I thought we were gonna fine-tune llama3 😢 but the Firecrawl implementation looks unreal, I'll have to check that out and add it to my RAGs.
    I don't know how well it'll work for RAGs, but people have extended the context window like crazy and can still do needle-in-a-haystack up to around 130k.
    If you have 64GB on the Mac you can try out the 256k-context-window Llama 3 released by Eric Hartford. Would love to see a side-by-side with both of them using the same embeddings.

  • @PoGGiE06
    @PoGGiE06 3 days ago

    Great video, thanks. New subscriber (and like) here. I had a couple of questions though: why use langchain? It seems unnecessary from what I have read. Would also love a demo ipynb/copy of code.

  • @FernandoOtt
    @FernandoOtt 22 days ago

    Awesome content Jason. A question: I need to create an AI psychologist and store college data, but this college data is a guide to what to say, not the content itself.
    In that case, what is the best approach, RAG or fine-tuning?

  • @tonygil8617
    @tonygil8617 21 days ago +1

    Hi, brilliant session. Do you have a link for the notebook?

  • @drakouzdrowiciel9237
    @drakouzdrowiciel9237 15 days ago

    thx

  • @faktogeek
    @faktogeek 22 days ago +1

    here come dat boi!!!!!!

  • @nrusimha11
    @nrusimha11 13 days ago

    Thank you. Can you say a little about your hardware setup for this work? This information is missing from a lot of online sources.

  • @Truzian
    @Truzian 22 days ago +1

    would be great to get a video on best methods for data extraction from these pdfs

  • @mikahundin
    @mikahundin 21 days ago +1

    The speaker in the transcript discusses the use of AI, particularly large language models, in knowledge management. They highlight that AI can provide value in managing vast amounts of documentation and meeting notes, which can be overwhelming for humans to process. The speaker also mentions the potential disruption of traditional search engines like Google by large language models, which can provide hyper-personalized answers based on their extensive knowledge.
    The speaker then introduces the concept of a retrieval augmented generation (RAG) pipeline, which involves extracting information from real data sources, converting them into a vector database, and retrieving relevant information to answer user queries. However, they also note the challenges in building a production-ready RAG application, including dealing with messy real-world data, accurately retrieving relevant information, and handling complex queries that may involve multiple data sources.
    The speaker also discusses various tactics to mitigate these challenges, such as better data preprocessing, optimal chunk size, relevance-based retrieval, and hybrid search methods. They also mention the use of agentic RAG, which utilizes agents' dynamic and reasoning abilities to decide the optimal RAG pipeline and improve the answer quality.
    The speaker concludes by expressing their curiosity about how AI-native startups operate and embed AI into their business processes. They recommend a research document on the subject for those interested.
    In summary, the speaker's points are:
    1. AI, particularly large language models, can provide significant value in knowledge management.
    2. Traditional search engines could potentially be disrupted by large language models.
    3. Retrieval augmented generation (RAG) pipelines can be used to answer user queries based on private knowledge.
    4. Building a production-ready RAG application is complex due to challenges like messy real-world data, accurate retrieval of relevant information, and handling complex queries.
    5. Various tactics can mitigate these challenges, including better data preprocessing, optimal chunk size, relevance-based retrieval, and hybrid search methods.
    6. Agentic RAG can further improve answer quality by utilizing agents' dynamic and reasoning abilities.
    7. The speaker is interested in how AI-native startups operate and embed AI into their business processes, and recommends a research document on the subject.

    • @pithlyx9576
      @pithlyx9576 19 days ago

      Dead internet theory is getting closer and closer every day
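The corrective RAG flow the summary above describes (retrieve, grade relevance, fall back to another source, then generate) can be sketched as a plain control-flow function. The callables are injected stubs so the sketch runs without a model; the names are illustrative, not taken from the LangGraph repo linked in the description:

```python
def corrective_rag_answer(question, retrieve, grade, generate, web_search):
    """Corrective-RAG control flow with injected callables.

    grade(question, doc) returns True for documents judged relevant;
    if nothing survives grading, we "correct" by falling back to
    web search before generating the final answer.
    """
    docs = retrieve(question)
    relevant = [d for d in docs if grade(question, d)]
    if not relevant:  # retrieval failed the relevance check -> change source
        relevant = web_search(question)
    return generate(question, relevant)

# Toy usage with stub components (no LLM required):
answer = corrective_rag_answer(
    "how to run llama3 locally",
    retrieve=lambda q: ["llama3 runs locally via Ollama"],
    grade=lambda q, d: "llama3" in d,
    generate=lambda q, ds: ds[0] if ds else "no answer",
    web_search=lambda q: ["(web) see ollama.com"],
)
```

In the real agent each stub would be an LLM call (the grader is itself a prompt), but the branching logic is exactly this.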

  • @biiiiiimm
    @biiiiiimm 21 days ago

    What about preparing the data, for example as question/response pairs, where the question is used to generate the embedding and the response is the data retrieved?

  • @EveDe-ug3zv
    @EveDe-ug3zv 22 days ago +1

    Great video Jason, I only missed routing as a technique to determine if your question should really go through the RAG. James Briggs has done a few good videos on “semantic routing”.
    Is your example notebook available somewhere?

  • @shimin3356
    @shimin3356 17 days ago

    Hey Jason, thanks for the video, I think it helps a lot. Can I apply this to GPT as well?

  • @VipulChaudhary1337
    @VipulChaudhary1337 22 days ago +2

    Goddamn it Jian Yang

  • @MrLiteratur
    @MrLiteratur 18 days ago +2

    Thanks, Jason, incredible as always! Would you consider sharing the code from the walkthrough? 🙏

    • @AIJasonZ
      @AIJasonZ  16 days ago

      Thanks mate, appreciate it! Code is in the description link!

    • @yashsrivastava677
      @yashsrivastava677 10 days ago

      @@AIJasonZ Link is not there

  • @ex3aliber
    @ex3aliber 22 days ago

    Amazinnnnggggg🎉🎉🎉🎉

  • @eventsjamaicamobileapp1426
    @eventsjamaicamobileapp1426 19 days ago

    Great video. How do I add PDF documents and llama_parse to the python notebook?

  • @mikey1836
    @mikey1836 19 days ago +1

    Interesting. Someone needs to create a wrapper which works out the best way to answer a question/query based on the input. I think the intelligence of the system could then be increased.

  • @mateuszzemke9194
    @mateuszzemke9194 3 days ago

    great content! Why wouldn't you use Groq to speed up the agent response?

  • @RenAok
    @RenAok 14 days ago

    Very useful, thank you! Is it possible for the model to retrieve images or graphs from a PDF, or is it only text?

  • @sinasec
    @sinasec 14 days ago

    Great thanks. Can we get the repo and link to the colab notebook?

  • @shephusted2714
    @shephusted2714 22 days ago +6

    too many API calls here - do it locally with no API calls - better, and the model has to be able to crawl more doc formats - people will probably do p2p, real-time and uncensored models for 'real' open source AI that has no limiting factors like API calls or tokens - this is where things need to go in order to take off, gain relevance and leverage economies of scale; of course CXL and better I/O will help but those are on the way. real open source AI will hit the SMB market in about 4-5 years and there will be more innovation and discovery - exciting times as we all watch the development curve

  • @sayfeddinehammami6762
    @sayfeddinehammami6762 21 days ago

    Good RAG video; the thumbnail talking about "training llama3" is hurting my brain tho

  • @sd5853
    @sd5853 19 days ago

    I don’t understand everything but I can feel the gold penetrating my ears

  • @junmagic8847
    @junmagic8847 21 days ago

    amazing as always. could you share the notebook please?

  • @AbdulMajeed-lf5sq
    @AbdulMajeed-lf5sq 14 days ago

    I watch lots of AI videos and 99% of them are just a waste of time. As an AI engineer, this channel is hands down the BEST yet
    KEEP UP👏🏼

  • @sharex21
    @sharex21 22 days ago +1

    I'm a simple man. I see a new AI Jason video, I click.

  • @nuluai
    @nuluai 16 days ago

    We've been trying to build a middleware that connects with any inventory ERP, so the chatbot has real-time information about inventory data

  • @thenickcornelius
    @thenickcornelius 6 days ago +1

    Came to train my 3 Llamas... Now I'm a full stack developer.

  • @gdr189
    @gdr189 17 days ago

    Hi, what are the areas current LLMs excel at?
    I am new to this world of AI, but not IT (familiar with infra). It is good that people are trying things out to see what it can do. But my naïve thought is that, as a language tool, it just looks for patterns of words that appear close together, and knows enough of the formation of language that it produces text that is not only readable, but also relevant. But this surely must have limits, if it does not actually understand?
    Would it serve up answers from well-vetted and well-written sources such as an internal KMS by using this RAG method? Our team was thinking about its use for education / learning - perhaps tied into custom flashcards and evaluation of human-provided answers. Alongside the still very useful text summarisation and alternative wording suggestions.

  • @CecilMerrell
    @CecilMerrell 16 days ago

    I like using gemini for getting quick up to date answers, and chat gpt for stuff that doesn't require up to date stuff

  • @KouadioJeanCyrilleNgoran
    @KouadioJeanCyrilleNgoran 8 days ago

    thanks Jason, can I use Llama via an API and train it on PDF files in a specific directory so it responds from them?

  • @ConsultingjoeOnline
    @ConsultingjoeOnline 20 days ago

    Great video. Thanks! A lot of very good tips!

  • @gsprlls
    @gsprlls 22 days ago

    Curious how this workflow changes with bigger context length. Gradient just released Llama-3 8B with a 1M context length

  • @uptonster
    @uptonster 3 days ago

    great video! Is there a GitHub location with the code?

  • @mahmood392
    @mahmood392 16 days ago

    Do you have plans to create a tutorial that connects what you're teaching here with running on something like AnythingLLM, which allows document reading to create embeddings?

  • @supergaulig
    @supergaulig 2 days ago

    How would you go about parsing documents of all kinds of types? PDFs, Excel, Word, etc... Is there a way to achieve this with only one parser? Or how would you go about this issue?

  • @Dom-zy1qy
    @Dom-zy1qy 20 days ago

    4:36 Someone walks into the void and disappears

  • @antsarktis8159
    @antsarktis8159 8 hours ago

    damn real life Jian Yang

  • @user-lw3fs3tl9x
    @user-lw3fs3tl9x 22 days ago +1

    Are those steps and that advice explained on your website? It would be amazing if you could share the code 😮

  • @dmy_tro
    @dmy_tro 20 days ago

    Can we also fine-tune the 70B model? Even if it's not local

  • @henry_room
    @henry_room 21 days ago

    Would there be a way to automate this with Obsidian? I sporadically log everything in Obsidian and it would be amazing to find a way to do this with it

  • @teapotexorcist
    @teapotexorcist 20 days ago

    There is a problem with the "Corrective RAG agent" URL in the description.

  • @mayanknagwanshi
    @mayanknagwanshi 16 days ago

    Damn it Jian Yang 😂

  • @JaimeGuajardo
    @JaimeGuajardo 19 days ago

    👍👍

  • @morffisTFT
    @morffisTFT 22 days ago +44

    Can you share the code in the video?

    • @basedmuslimbooks
      @basedmuslimbooks 19 days ago +5

      I was hoping that was the case since it's a "simple" workflow

    • @pollywops9242
      @pollywops9242 18 days ago

      The code is personal: you need to apply for a download link with Meta, and it will provide the code to copy / paste

    • @christenjacquottet9799
      @christenjacquottet9799 6 days ago

      @@pollywops9242 apply where? I don’t see it

    • @joesmoo9254
      @joesmoo9254 2 days ago

      ​@@pollywops9242😂

  • @FunkyByteAcademy
    @FunkyByteAcademy 16 days ago

    Fucking dope bra

  • @user-nc8kp5kg5c
    @user-nc8kp5kg5c 19 days ago

    Can u create an end-to-end custom fine-tuning of Llama with an API?