Building a RAG application using open-source models (Asking questions from a PDF using Llama2)

  • Published: Mar 8, 2024
  • GitHub Repository: github.com/svpino/llm
    I teach a live, interactive program that'll help you build production-ready machine learning systems from the ground up. Check it out at www.ml.school.
    Twitter/X: @svpino
  • Science

Comments • 90

  • @berkbatuhangurhan708
    @berkbatuhangurhan708 2 months ago +7

    Came from X, this is an amazing and very detailed walk through. Thanks for explaining even the tiniest bits of everything. Highly recommend this.

  • @GaurangDave
    @GaurangDave 1 month ago +2

    Oh please don't stop creating these videos, this is really helpful. Very detailed and well explained! Thank you so much for this!

  • @TooyAshy-100
    @TooyAshy-100 2 months ago +1

    Santiago, your videos on LLMs have been incredibly helpful! Thank you so much for sharing your expertise.
    I'm eager to see more of your content in the future.

  • @anonymoustechnopath1138
    @anonymoustechnopath1138 2 months ago +7

    Thanks a lot Santiago!!
    Really needed these videos for LLMs.
    Keep them coming!

  • @QuentinFennessy
    @QuentinFennessy 1 month ago +2

    This is an excellent walk through - easy to follow and very practical

  • @yasirgamieldien
    @yasirgamieldien 1 month ago

    This is an amazing video. Literally answered all the questions I had on building a RAG and it was really useful to see the comparison between GPT, Llama, and Mixtral

  • @liuyan8066
    @liuyan8066 2 months ago +1

    I like these fundamental courses, especially the last RAG one. I followed other trainings to build AI products; some of them are over 10 hours, and after I finished I still didn't fully understand why I coded things that way. These courses make the connections step by step. Thank you.

  • @lokeshsharma4177
    @lokeshsharma4177 18 days ago

    This is the BEST video ever made comparing all the LLMs performing the same task. God bless you.

  • @sarash5061
    @sarash5061 1 month ago +1

    This was just amazing, you are a star. Thanks for all the effort.

  • @SuhasKM-tl1rg
    @SuhasKM-tl1rg 2 months ago

    I love your content. More of this in my feed please!

  • @geethikaisurusampath
    @geethikaisurusampath 1 month ago

    This is really helpful, especially the explanations behind why to do things. Keep up the good work. Respect to you, man.

  • @bhusanchettri8594
    @bhusanchettri8594 2 months ago +1

    Great piece of work. Well explained!

  • @asifm3520
    @asifm3520 17 days ago

    That was a really clear explanation. Even novices will have no trouble following along.

  • @sumitrana8114
    @sumitrana8114 2 months ago

    Thank you for leaving your job and starting your channel.

  • @swatantrasohni5235
    @swatantrasohni5235 2 months ago +1

    Thanks Santiago for the wonderful video. Running an LLM locally is very handy for a variety of tasks. Eventually everyone will have their own LLM running locally on their device; that's the future.

  • @dannysuarez6265
    @dannysuarez6265 22 hours ago

    What a great presentation! Thank you so much, sir!

  • @fredericv3497
    @fredericv3497 9 days ago

    Really good job and clear tutorial ! Thank you

  • @sushanths.l4865
    @sushanths.l4865 2 months ago +5

    This is a great video, Santiago. I really learned a lot.

  • @alexstele5315
    @alexstele5315 2 months ago

    Thanks a bunch! 🎉 I've been looking for something like that.

  • @RameshBaburbabu
    @RameshBaburbabu 2 months ago

    Wow, great video. I was able to walk through it with you and finished till the end. "Batch" is great. Thanks, please post more videos. 🙏🙏

  • @adinathdesai6880
    @adinathdesai6880 15 days ago

    Amazing Video. You added great value to our knowledge. Thank you so much.

  • @noa2427
    @noa2427 1 month ago +3

    I am running into a vector store problem: an import error for docarray, which I did install. I tried many versions of docarray and DocArrayInMemorySearch. Any help? Thanks.
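    For readers hitting the same error: the snippet below is a minimal check, assuming a recent LangChain layout where the DocArray-backed store lives in the langchain-community package (versions vary, so verify against your installation).

    ```python
    # Typical fix for the import error (an assumption to verify for your versions):
    #   pip install -U langchain-community docarray
    try:
        from langchain_community.vectorstores import DocArrayInMemorySearch
        import_ok = True
    except ImportError as exc:
        # Usually means docarray (or langchain-community) is missing or mismatched.
        import_ok = False
        print("Still missing a dependency:", exc)
    ```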

  • @koko9712
    @koko9712 2 months ago +1

    Nice video Santiago ! Keep up the good work

  • @chanukyapekala
    @chanukyapekala 1 month ago

    excellent work! so clear and concise..

  • @alextiger548
    @alextiger548 9 days ago

    Ma, thanks for what you are doing! Fantastic stuff.

  • @junaidali1853
    @junaidali1853 2 months ago

    Lovely. Super useful video. I’ll be building a RAG system with a Vector Database and langchain for my freelance client for around $2,000 or more. Thanks Santiago for helping make my life better.

  • @MarkoKhomytsya
    @MarkoKhomytsya 2 months ago +1

    Thank you for the video!
    I found it particularly intriguing to consider the possibility of obtaining more accurate responses from the PDF using the Llama2 model. Given that local Language Models (LMs) tend to be highly sensitive to how queries are formatted, I believe it's crucial to refine your example further. Here are a couple of suggestions:
    1) Instead of relying on a basic parser, it would be beneficial to prepare a set of predefined questions and answers. For instance, a question like "How much does the course cost?" could have a straightforward answer like "$400."
    2) It's also important to determine the optimal format for prompts, specifically tailored for models like Mistral.
    By addressing these points, you could develop a truly functional product that delivers accurate responses. As it stands, most examples seem to demonstrate that local models struggle with practical applications and aren't quite ready for real-world deployment.

    • @underfitted
      @underfitted  2 months ago

      Great suggestions!

    • @mehmetbakideniz
      @mehmetbakideniz 2 months ago +1

      Hi, prompt engineering would definitely solve the problem of verbose answers, but do you think it would also correct hallucinations as seen in the video?

    • @MarkoKhomytsya
      @MarkoKhomytsya 2 months ago

      Good question @mehmetbakideniz! I would like to know the answer too!
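      The prompt-format suggestion in point 2 of the thread above can be sketched in plain Python. The [INST] tag convention shown here is an assumption to verify against the specific model card you deploy:

      ```python
      # Sketch: wrapping a retrieved context and a question in a Mistral-style
      # instruction template. Exact tags vary between model releases, so treat
      # this template as an assumption, not the canonical format.
      def build_prompt(context: str, question: str) -> str:
          instruction = (
              "Answer the question using only the context below. "
              "If the answer is not in the context, reply 'I don't know.'\n\n"
              f"Context: {context}\n\nQuestion: {question}"
          )
          return f"<s>[INST] {instruction} [/INST]"

      print(build_prompt("The program costs $400.", "How much does the course cost?"))
      ```

      Constraining the model to the context like this is also the usual first step against both verbosity and hallucination.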

  • @seanb9949
    @seanb9949 2 months ago

    Another great video Santiago! I really look forward to seeing more of these. Heck, I'll watch the ads to make sure you get some $$$ 🙏

  • @TheMunishk
    @TheMunishk 2 months ago

    Congrats and well done for producing this useful content. Exactly what I was looking for to kick-start my LangChain journey with the models. Let me practice this, but I was also looking for how to integrate all this into the front end. Do you have a video on which tools to use to build a front end for the prompt that will interact with the backend LLMs?

  • @ThamBui-ll7qc
    @ThamBui-ll7qc 1 month ago +1

    Great video. I would love to see how to properly structure the prompt and make the bot remember context as the conversation goes on...

  • @fintech1378
    @fintech1378 2 months ago +2

    Is searching via embeddings always better than "traditional" search, i.e., a very long context window? Where should we use one or the other? And what if we want to build a multimodal video recommendation system?

  • @farukondertr
    @farukondertr 1 month ago

    Dude, it's awesome! Don't stop, please.

  • @kergee
    @kergee 15 days ago

    The lesson was very good, thanks

  • @gonzaloplazag
    @gonzaloplazag 27 days ago

    Great video! incredibly helpful!!!

  • @user-dg9by2ju2y
    @user-dg9by2ju2y 2 months ago

    Very informative video, Santiago!

  • @nevildev
    @nevildev 2 months ago

    Thx! Very straightforward

  •  2 months ago

    Love your video ! Thanks !

  • @epicfootball007
    @epicfootball007 2 months ago

    you are by far the best teacher on youtube regarding ML/AI. please consider launching a course on generative AI.

  • @square007tube
    @square007tube 17 days ago

    Many thanks for this video. I walked through it and was able to install Ollama3 on my machine, but I have an NVIDIA MX250 GPU, which takes a long time to answer questions: 7 minutes for two questions. I will watch your LLM playlist.

  • @sumittupe3925
    @sumittupe3925 1 month ago

    Thanks for the video.....
    Well Explained....!

  • @mehmetbakideniz
    @mehmetbakideniz 2 months ago +1

    This was super helpful. I noticed that, using an M2 Pro, some cells took 16 seconds on my laptop while they took just 0.5 seconds on your computer; then you said you are using an M3 GPU. How can I make sure I am using the GPU instead of the CPU when executing this code? Or does LangChain already utilize the GPU when needed?
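    For PyTorch-based code, a quick check of Apple's Metal (MPS) backend looks like the sketch below. This is only an assumption about the questioner's setup: Ollama serves the model in its own process and picks the GPU on Apple Silicon by itself, so this check applies only to code that runs models through PyTorch.

    ```python
    # Sketch: check whether PyTorch can see Apple's Metal (MPS) backend.
    try:
        import torch
        mps_available = torch.backends.mps.is_available()
    except ImportError:
        mps_available = False  # torch not installed in this environment

    print("MPS available:", mps_available)
    ```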

  • @fatiga2426
    @fatiga2426 15 days ago

    Santiago, very good video!
    One question: why do you use a parser to get the model output as a string? Why not get the content directly?
    Regards

  • @malikgaruba4079
    @malikgaruba4079 2 months ago

    Awesome video. Thanks.

  • @peacefullmusic8374
    @peacefullmusic8374 10 days ago

    Best tutorial to start with.

  • @samcavalera9489
    @samcavalera9489 2 months ago

    Thanks Santiago! I am a student of your ML School course and I have taken it in two different cohorts. Your ML School course is definitely the best of its kind on the market. Can you please design a new course on RAG that covers everything about this awesome technology, including evaluation techniques and deployment? That would be wonderful, and I cannot wait to enrol in your RAG (and any other AI) course!

  • @researchpaper7440
    @researchpaper7440 2 months ago +1

    I was looking for these videos. Next, I'm looking for a model to train on SQL data.

  • @lindavid1975
    @lindavid1975 9 hours ago

    Thank you Santiago - sorry about the code red.

  • @andresfelipeestradarodrigu301
    @andresfelipeestradarodrigu301 1 month ago

    AMAZING BRO, THANKS

  • @joeldartez829
    @joeldartez829 2 months ago

    Truly, you're the best. I've never met someone who explains things so well.
    I apologize if it is written somewhere and I missed it, but I wanted to ask if I buy your course today, can I have access to the past content today? I don't want to wait until the live sessions in April (or I want to arrive prepared for them). Thank you very much.

    • @underfitted
      @underfitted  2 months ago +1

      Yes, you get immediate access to everything from day 1.

  • @MarcosScheeren
    @MarcosScheeren 2 months ago

    Came here from X. Great overview on how to implement an LLM+RAG locally. Any multimodal ones incoming?

  • @researchpaper7440
    @researchpaper7440 2 months ago

    Amazing channel, just a great guy.

  • @TomasTrenor
    @TomasTrenor 16 days ago

    Amazing video Santiago! Many thanks. I just tried it with Llama 3 8B and it seems it is not as accurate as Llama 2 (which obviously doesn't make sense). I need to dig into it.

  • @mrskenz1068
    @mrskenz1068 2 months ago +1

    Thanks for the video. How can we do this for scientific PDFs that contain a lot of mathematical and chemical formulas?

  • @TexttoInvoice
    @TexttoInvoice 1 month ago

    This video is awesome, so so great!!! Thank you so much for such a quality video.
    Question: what's the best way to improve how accurately the results reflect the document? Can using structured data such as spreadsheets and CSV files give you a more accurate answer; does the model prefer interacting with them?
    Also, what if there are multiple instances of the data, say several different documents containing the same information that needs to be referenced?
    If anyone has found the best way to get correct answers from retrieval, please let me know! Thanks.

  • @sam-uw3gf
    @sam-uw3gf 2 months ago

    good video and your tweets are more informative....✌

  • @Jonathan-ru9zl
    @Jonathan-ru9zl 1 month ago

    Great! Can this model and setup serve as an assistant to, let's say, a board design engineer who has thousands of component specs in PDF files?
    To find and analyze the components faster?

  • @nguyenquocviet4287
    @nguyenquocviet4287 2 months ago

    Dear Santiago,
    I would like to ask you about evaluation metrics.
    Do you know any metric for comparing the generated answers against the true answers (e.g., the ROUGE metric)?
    Thank you so much!
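    As a minimal illustration of the idea behind ROUGE, here is a tiny ROUGE-1 recall function in plain Python. Real evaluations typically use a library such as rouge-score; this sketch only shows the core unigram-overlap idea, and the example strings are made up:

    ```python
    # ROUGE-1 recall: what fraction of the reference's words appear in the
    # generated answer (counting duplicates at most as often as they occur).
    from collections import Counter

    def rouge1_recall(reference: str, generated: str) -> float:
        ref = Counter(reference.lower().split())
        gen = Counter(generated.lower().split())
        overlap = sum(min(count, gen[token]) for token, count in ref.items())
        total = sum(ref.values())
        return overlap / total if total else 0.0

    print(rouge1_recall("the course costs $400", "the course costs $400 per cohort"))  # → 1.0
    print(rouge1_recall("the course costs $400", "it is expensive"))                   # → 0.0
    ```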

  • @derekottman9622
    @derekottman9622 1 month ago

    This video is supposed to have a link to another "from scratch" video as a popup, but when that link pops up, I think it actually points in a circular loop back to this video instead of to the other video it's supposed to reference. (This video has a link to itself, if I didn't get my wires crossed.)

    • @underfitted
      @underfitted  1 month ago

      I don’t think that’s possible? Anyway, you’ll find the other video here: ruclips.net/video/BrsocJb-fAo/видео.htmlsi=BVJfS_0Iq9lwRX0B

  • @nikkypuvvada2666
    @nikkypuvvada2666 1 month ago

    Thanks

  • @learningwithmahasin
    @learningwithmahasin 1 month ago

    kindly create another video in which you use Pinecone and also give a GUI making it a complete standalone application

    • @serhiua
      @serhiua 13 days ago

      Pinecone is already explained here: ruclips.net/video/BrsocJb-fAo/видео.html&ab_channel=Underfitted

  • @mehmetbakideniz
      @mehmetbakideniz 2 months ago

    Thanks!

    • @underfitted
      @underfitted  2 months ago

      Thank you so much! Really appreciate you!

  • @user-pb8qi4ht4h
    @user-pb8qi4ht4h 13 days ago

    Sir, how can I do this project using Java or Spring Boot?

  • @user-rj1eu6kp3u
    @user-rj1eu6kp3u 2 months ago

    Has anybody used OllamaEmbeddings and got it working?

  • @Hizar_127
    @Hizar_127 13 days ago

    I want to deploy it on the cloud. Is that possible?

  • @user-uu1ko7oi8z
    @user-uu1ko7oi8z 5 hours ago

    Can I use Llama 3 models with your tutorial?

  • @basantsingh6404
    @basantsingh6404 2 months ago

    If you are using an OpenAI key, it means you are paying to use the OpenAI model. How is it open source?

  • @theDrewDag
    @theDrewDag 2 months ago +5

    Is it actually true that you need models to be aligned with their respective embeddings? I don't think so 🤔 Embeddings are used only for the vector search and lookup functionality. At the end of the day, all the model sees is your textual prompt. You can use OpenAI embeddings with any open-source model and vice versa.

    • @underfitted
      @underfitted  2 months ago +7

      You are right. In this example I only use the embeddings for the search, so what I said is irrelevant here.
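      The point in this thread can be illustrated with a small sketch: the embedding model only ranks chunks, and the winning chunk is pasted into a plain-text prompt, so the retriever's embeddings and the chat model are independent choices. All vectors and texts below are made up for the example:

      ```python
      # Toy retrieval step: rank chunks by cosine similarity, then build a
      # plain-text prompt from the winner. Any chat model can consume it.
      import math

      def cosine(a, b):
          dot = sum(x * y for x, y in zip(a, b))
          norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
          return dot / norm

      chunks = {
          "The program costs $400.": [0.9, 0.1, 0.0],
          "All sessions are recorded.": [0.1, 0.8, 0.2],
      }
      query_vec = [0.85, 0.15, 0.05]  # hypothetical embedding of the question

      best = max(chunks, key=lambda text: cosine(chunks[text], query_vec))
      prompt = f"Context: {best}\n\nQuestion: How much does it cost?"
      print(prompt)
      ```

      The only practical constraint is that documents and queries must be embedded with the same embedding model, so that their vectors live in the same space.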

  • @learningwithmahasin
    @learningwithmahasin 1 month ago

    kindly convert the same project into a GUI based application

  • @mohamadhasanzeinali3674
    @mohamadhasanzeinali3674 18 days ago

    Good video, but I see a lot of ads.

  • @samcavalera9489
    @samcavalera9489 2 months ago

    Many thanks Santiago!
    Before watching this video, I was mainly using the series of tests from this video for my rag applications:
    ruclips.net/video/nze2ZFj7FCk/видео.htmlsi=-69PI_cJqJn4SgVf
    Now, my life is much simpler than ever 😂
    Thanks hero 🙏

  • @serhiua
    @serhiua 13 days ago

    Thank you again for the wonderful explanations; they are works of art!
    This was a very good logical continuation of ruclips.net/video/BrsocJb-fAo/видео.html.

  • @fredericv3497
    @fredericv3497 9 days ago

    Thank you!

  • @RameshBaburbabu
    @RameshBaburbabu 2 months ago

    Thanks Santiago for the wonderful videos!! I still need to do the step-by-step instructions shown in ruclips.net/video/BrsocJb-fAo/видео.html. Wonderful explanation. Thanks, keep them coming and helping us follow along and gain some knowledge.

  • @UditAgarwalBME
    @UditAgarwalBME 10 days ago

    langchain_community is not working, unable to import it.

    • @UditAgarwalBME
      @UditAgarwalBME 10 days ago

      Worked after pip install langchain-community, thanks.