RAG vs. Fine Tuning

  • Published: Feb 1, 2025

Comments • 112

  • @musembi
    @musembi 1 month ago +11

    Cedric Clyburn, you are a very clear communicator. Thanks for this.

  • @yusufersayyem7242
    @yusufersayyem7242 4 months ago +29

    I received the notification 35 minutes after the clip was posted, perhaps due to the weak internet in my country... Finally, I would like to thank you, sir, for this wonderful explanation.

  • @educationrepublic9273
    @educationrepublic9273 4 months ago +14

    Love IBM's short and sharp explainers! Thank you for an excellent video once again :)

  • @Kk-ed1gr
    @Kk-ed1gr 4 months ago +22

    Thank you for the clarification, I had this question in mind last week, and I am glad that you have provided the answers I need.

  • @MohammadFaizanKhanJ
    @MohammadFaizanKhanJ 1 month ago +3

    This channel continuously produces very good academic videos! Thanks for sharing them for free!

  • @phoga6463
    @phoga6463 15 days ago

    I love the easy-to-understand explanation while not skimping on details, good job!

  • @FauziFayyad
    @FauziFayyad 4 months ago +7

    I just watched the one from a year ago, and then it was updated today. Amazing 🎉

  • @vladmirbc8712
    @vladmirbc8712 14 days ago +3

    From what I've observed in practice, fine-tuning has been useful and necessary in about 1-2 cases out of 10. Firstly, there's the issue of "catastrophic forgetting." Secondly, if the model was initially trained with RLHF, fine-tuning will disrupt that as well. As a result, we end up with a significantly weaker model.

  • @florentromanet5439
    @florentromanet5439 4 months ago +21

    I wanted to scream "WHY NOT BOTH⁉️" until 7:35 😂

    • @jeevan88888
      @jeevan88888 12 days ago

      It's harder, costs more, produces more errors, is tougher to maintain, etc.

  • @NabiehSalim
    @NabiehSalim 1 month ago +1

    Thanks for the video, please we need more (from you).

  • @roberthuff3122
    @roberthuff3122 17 hours ago

    🎯 Key points for quick navigation:
    00:00 *🔍 Introduction to RAG and Fine Tuning*
    - Overview of RAG and fine tuning as methods to enhance large language models,
    - Discussion on the limitations of generative AI and the need for specialization for specific use cases.
    00:54 *📚 Understanding Retrieval Augmented Generation (RAG)*
    - Explanation of how RAG works by retrieving external, up-to-date information to provide accurate responses.
    - Use of a retriever system to pull relevant data and deliver contextualized answers from a large language model.
    03:16 *⚙️ Exploring Fine Tuning*
    - Detailed description of fine tuning as a method to specialize a model for specific tasks and tone.
    - Emphasis on the implications of fine tuning on model performance, output speed, and computational efficiency.
    05:36 *🔄 Comparing Strengths and Weaknesses of RAG and Fine Tuning*
    - Analysis of RAG and fine tuning strengths in relation to data types and required models.
    - Discussion on the contexts in which each method may be more suitable based on application needs.
    07:58 *🤝 Combining RAG and Fine Tuning for Optimal Applications*
    - Presentation of scenarios where a combination of both techniques can enhance model functionality.
    - Examples of how leveraging both methods can build powerful, specialized applications in various industries.
    Made with HARPA AI

  • @leebushen
    @leebushen 22 days ago

    Great summary well delivered!

  • @sabeensadaf4926
    @sabeensadaf4926 2 months ago

    Thanks a lot, that was a good one to understand both RAG and fine tuning.

  • @zeeshan_haidar175
    @zeeshan_haidar175 10 hours ago

    Good job man

  • @awaisahmad5908
    @awaisahmad5908 27 days ago

    Great demonstration. Thank you so much!

  • @sandeeppatil5925
    @sandeeppatil5925 3 months ago +1

    Wonderful explanation... would also love to know which to choose from a TCO or cost point of view.

  • @sonjoysengupto
    @sonjoysengupto 1 month ago +1

    Awesome! Thanks a lot.

  • @GrowStackAi
    @GrowStackAi 1 month ago

    Innovation becomes effortless when AI is involved 🔥

  • @MukeshKala
    @MukeshKala 4 months ago +4

    Great explanation ❤

  • @mohamadghojavand7870
    @mohamadghojavand7870 4 days ago +1

    Hi everyone
    Can DeepSeek be used as the LLM in a Retrieval-Augmented Generation (RAG) system? If so, are there any specific considerations or limitations I should be aware of?

  • @RaushanKumar-qb3de
    @RaushanKumar-qb3de 2 months ago

    Wow... the combination is great. Thanks, dear, for the information!

  • @bharathYerukola-gt7vt
    @bharathYerukola-gt7vt 4 months ago +3

    Make a video on the terminologies often used in AI, like benchmark, state of the art, etc. ❤

  • @EmaPython
    @EmaPython 2 months ago

    Great video, thanks. It was useful for me

  • @steveyy3567
    @steveyy3567 4 months ago

    Clear clarification, great job!

  • @bharathYerukola-gt7vt
    @bharathYerukola-gt7vt 4 months ago +2

    Nice video. Also, make a video on neural networks in depth: how a neural network is interlinked with deep learning and machine learning, what a neural network actually is, what its architecture is and why architecture is important for neural networks, and whether a neural network is actually a technique, a mathematical expression, or something else. So make a video on all of these.

  • @hrshtmlng
    @hrshtmlng 11 days ago +1

    Can we have RAG vs CAG also?

  • @satwiknag711
    @satwiknag711 1 month ago +1

    In terms of digital asset allocation, which is less resource-intensive, RAG or fine-tuning? RAG will definitely be easier to bring to market in a short time.

  • @infotainmentunlimitedbyrohit
    @infotainmentunlimitedbyrohit 4 months ago +2

    Thank you 🙏💛

  • @rfflduck
    @rfflduck 4 months ago +1

    Great video!

  • @geethikedesilva1784
    @geethikedesilva1784 21 days ago +1

    I love my large MANGUAGE models (LMMs) 😅

  • @andiglazkov4915
    @andiglazkov4915 4 months ago +2

    Thanks ☺️

  • @ThiagoCarrati
    @ThiagoCarrati 1 month ago

    Amazing, and how does he write in reverse?

  • @rajeshkumar-pz3ym
    @rajeshkumar-pz3ym 1 month ago

    It would be great to know the cost of materializing both approaches, and whether fine-tuning the model for a specific industry will incur additional training.

  • @giovannispillo5176
    @giovannispillo5176 3 months ago

    Fantastic technology, great lesson.

  • @shrutisingh9801
    @shrutisingh9801 4 months ago +1

    Can you make a video about reinforcement learning and performance evaluation of LLM models?

  • @GG-uz8us
    @GG-uz8us 4 months ago +5

    I would like to see a real app that is in production with RAG and fine-tuning.

  • @apraksh
    @apraksh 1 month ago

    Wonderful!

  • @ravikiran3714
    @ravikiran3714 1 day ago

    Can a multi-agentic system that has access to the domain knowledge provide the style and domain knowledge, and offer the same features as fine-tuning, meaning potentially replace fine-tuning altogether without having to update the base model itself?

  • @emont
    @emont 3 months ago +2

    Your video didn't include EDA. LLMs answer based on preloaded info; a future evolution is LLMs answering based on real-time information.

  • @johannvgeorge8393
    @johannvgeorge8393 4 months ago

    Thank you for this helpful video🙂. Could you please explain the implementation of how we can update the RAG system with the latest information?

    • @kalcavaleiro6993
      @kalcavaleiro6993 3 months ago

      Updating the database? Because RAG uses something like a vector database that is updated regularly, using similarity between the prompt and the database content to retrieve relevant chunks and then augment the prompt sent to the LLM.
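
The retrieve-then-augment flow described in the reply above can be sketched in a few lines of Python. This is a toy in-memory store with made-up three-dimensional embeddings; a real system would use a learned embedding model and a vector database:

```python
import math

# Toy "vector database": each document has a pre-computed embedding.
# In practice these come from an embedding model and live in a vector DB.
DOCS = {
    "RAG retrieves external documents at query time.": [0.9, 0.1, 0.0],
    "Fine-tuning updates a model's weights on new data.": [0.1, 0.9, 0.0],
    "Vector databases index embeddings for similarity search.": [0.6, 0.2, 0.6],
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_embedding, k=2):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(DOCS[d], query_embedding),
                    reverse=True)
    return ranked[:k]

def augment(question, query_embedding):
    """Build the augmented prompt that is sent to the LLM."""
    context = "\n".join(retrieve(query_embedding))
    return f"Context:\n{context}\n\nQuestion: {question}"

# A query whose (made-up) embedding sits closest to the RAG document:
print(augment("How does RAG get fresh information?", [0.8, 0.2, 0.1]))
```

Keeping the system current is then just a matter of re-embedding new documents into the store, which is the "regularly updated database" point made above.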

  • @MrRavaging
    @MrRavaging 23 days ago

    Is it possible to develop a module combining both of these techniques so that the AI Agent consistently fine-tunes the LLMs used in its operation while it operates? Or do we have to choose one or the other?

    • @vincentchanbiz
      @vincentchanbiz 20 days ago

      Fine-tuning an LLM can be cost-intensive to do regularly. I suppose a hybrid solution is possible by only fine-tuning the model once a large amount of new information has been released.

  • @mark-lq4rk
    @mark-lq4rk 4 months ago +1

    Thank you for the fascinating presentation. Assuming certain conditions are similar, how would the cost of RAG and fine-tuning differ?

    • @IBMTechnology
      @IBMTechnology 4 months ago +3

      RAG is generally more cost-efficient than fine-tuning because it limits resource costs by leveraging existing data and eliminating the need for extensive training stages.

    • @scycer
      @scycer 2 months ago

      True, but at scale I would assume the excessive context that needs to be provided at runtime via RAG to answer questions may outweigh the initial cost of fine-tuning, no? It seems fine-tuning is an up-front investment in training costs while RAG is an ongoing cost of additional tokens, at least for context that could have been trained ahead of time.
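
The trade-off debated in this thread, a one-time training spend for fine-tuning versus a per-request token surcharge for RAG, can be made concrete with back-of-envelope arithmetic. Every number below is hypothetical, purely to show the shape of the break-even calculation:

```python
# Hypothetical numbers -- substitute your own provider's pricing.
FINE_TUNE_COST = 500.00      # one-time training cost, dollars
RAG_EXTRA_TOKENS = 2_000     # extra context tokens RAG injects per request
PRICE_PER_1K_TOKENS = 0.01   # inference price, dollars per 1,000 tokens

def rag_overhead_per_request():
    """Ongoing cost of the extra retrieved context on each call."""
    return RAG_EXTRA_TOKENS / 1000 * PRICE_PER_1K_TOKENS

def break_even_requests():
    """Requests after which the one-time fine-tune pays for itself."""
    return FINE_TUNE_COST / rag_overhead_per_request()

print(f"RAG overhead per request: ${rag_overhead_per_request():.4f}")
print(f"Break-even after {break_even_requests():,.0f} requests")
```

Below the break-even volume RAG is cheaper; above it, the fine-tuning investment can win, ignoring retraining needed whenever the data changes.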

  • @BibekMishra84
    @BibekMishra84 2 months ago +2

    I have a question. Is the LLM retrained on the new information during fine-tuning?

    • @jignareshamwala3401
      @jignareshamwala3401 2 months ago +2

      Yes, the LLM is retrained for fine-tuning. For efficient fine-tuning, check out parameter-efficient fine-tuning (PEFT). In PEFT a small set of parameters is trained while preserving most of the large pre-trained model's structure, so PEFT saves time and computational resources.
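
The parameter savings PEFT aims for can be illustrated by simple counting: a LoRA-style method freezes the full weight matrix and trains only a pair of low-rank matrices. The layer size and rank below are made up for illustration:

```python
def lora_param_counts(d_in, d_out, rank):
    """Compare full fine-tuning vs. a LoRA-style low-rank update.

    Full fine-tuning trains every entry of the d_out x d_in weight
    matrix. LoRA freezes it and trains two small matrices, A of shape
    (rank x d_in) and B of shape (d_out x rank), whose product B @ A
    is added to the frozen weights at inference time.
    """
    full = d_out * d_in
    lora = rank * d_in + d_out * rank
    return full, lora

# Hypothetical layer: 4096 x 4096 weights, LoRA rank 8.
full, lora = lora_param_counts(4096, 4096, 8)
print(f"Full fine-tune: {full:,} trainable parameters")
print(f"LoRA adapter:   {lora:,} trainable parameters "
      f"({100 * lora / full:.2f}% of full)")
```

With these made-up shapes the adapter trains well under 1% of the layer's parameters, which is where the time and compute savings come from.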

    • @GreatTaiwan
      @GreatTaiwan 1 month ago

      @jignareshamwala3401 this feels like a ChatGPT answer 😂

  • @ElaraArale
    @ElaraArale 4 months ago +1

    Thank you~!

  • @joeyjiron06
    @joeyjiron06 3 months ago

    Love the video! I'm building an app that empowers users to generate landing pages with a prompt using AI. I'm planning on building many custom components/sections that I want the model to use when generating a response. I want the model to choose the right sections to use and fill out the copy in the components to fit the user's prompt.
    What would be the best way to handle this in the model? Use RAG, fine-tuning, both, neither, something else?

  • @davidrivera2946
    @davidrivera2946 3 months ago

    As a developer, I worked on and created a system with the RAG pattern and it's fine, but I had problems with specific documents. I mean, when you play with tons of documents the RAG system gets more complex, and you depend heavily on strong prompts. I haven't played with fine-tuning yet, but it's something I plan to experiment with soon. Nice video, thanks.

  • @youssefsayed4378
    @youssefsayed4378 3 months ago +1

    Use case: if I have a huge online library of books and I need to use an LLM to answer questions based on these books and research papers, I guess we will use RAG. But the point is: can we use it with a really HUGE amount of data (books and PDFs)? And what if there are multiple answers to the same question from different resources, and each resource has its own opinion which could go in a different direction from the others? What will happen?

    • @nickbobrowski
      @nickbobrowski 3 months ago +1

      Great use case, Youssef! When you use RAG, it provides the model with multiple snippets of documents from your database. It's important to adjust the chunk size and the number of snippets injected into the context along with the user prompt. Typically, what I do with my clients is start with creating a set of evaluations for the system. These look like example prompts and example outputs. Any change I make to the system - I always run evals to see if the performance improves or gets worse.
      Once we have evals that measure how close the actual outputs are to the target outputs, we can work backwards and optimize the chunk size and number of snippets provided to the LLM. This way, it will get a balanced selection of relevant documents from your database. In some cases, it requires careful engineering to write proper search queries.
      Finally, the way the model writes the final response based on the retrieved information can be steered by instructions and fine-tuning. If you're interested in AI Development, feel free to contact me!
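
The eval-driven workflow described above, example prompts paired with target outputs and re-scored after every change, can be sketched generically. The scoring function and eval cases here are placeholders, not a real benchmark:

```python
def score(actual: str, target: str) -> float:
    """Crude word-overlap score between an actual and a target output.

    Real eval suites use task-specific metrics or LLM-as-judge scoring;
    this stand-in just measures shared vocabulary.
    """
    a, t = set(actual.lower().split()), set(target.lower().split())
    return len(a & t) / len(t) if t else 0.0

def run_evals(system, eval_set):
    """Run every eval case through the system and average the scores."""
    results = [score(system(case["prompt"]), case["target"])
               for case in eval_set]
    return sum(results) / len(results)

# Placeholder eval set and a trivial "system" to exercise the loop.
EVALS = [
    {"prompt": "What is RAG?", "target": "retrieval augmented generation"},
    {"prompt": "What is PEFT?", "target": "parameter efficient fine tuning"},
]

def echo_system(prompt):
    """Stand-in "system" that always returns the same answer."""
    return "retrieval augmented generation"

print(f"Mean eval score: {run_evals(echo_system, EVALS):.2f}")
```

After any change to chunk size or snippet count, re-running `run_evals` on the same eval set tells you whether the change helped or hurt, which is the optimize-backwards loop the reply describes.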

    • @prathamsinghdhankhar5001
      @prathamsinghdhankhar5001 1 month ago

      Can you please share your email or Instagram ID so I can get in touch with you?

  • @CalvHobbes
    @CalvHobbes 3 months ago

    Thank you, this is very useful. I'm curious about how the volume of data might affect the choice of FT vs RAG. If we tune the model again with the new data, would it become much larger over time? On the other hand, if we use RAG, would the restrictions on context length hold us back (i.e. if we don't want a very expensive model)?

  • @cho7official55
    @cho7official55 4 months ago +1

    I thought the retriever was on the far right, with the LLM in the middle of both. Was I wrong, or at least partially? Or does that schematic representation not capture all of the architecture? I'd like to go deeper on that matter.

    • @cloudnativecedric
      @cloudnativecedric 4 months ago

      There are a lot of variances with the RAG approach that can lead to different architectures, but there's a full video on the IBM Technology channel that dives into RAG as well!

  • @salehmir9205
    @salehmir9205 3 months ago

    this is gold

  • @Louic099
    @Louic099 1 month ago

    Most impressive is your mirrored writing.

  • @Siapanpeteellis
    @Siapanpeteellis 4 months ago +2

    What happens to a model when it is fine-tuned? Do you use a database for RAG?

    • @cloudnativecedric
      @cloudnativecedric 4 months ago +3

      Good question! So with fine-tuning, using an approach like PEFT (parameter-efficient fine-tuning), which only updates a subset of the full model's parameters, we get new model weights and biases, which could then be shared, deployed on a server, etc. for model inferencing with AI-enabled applications. For RAG, yes indeed, the most common method is a vector database, turning your data into embeddings to search for similarity when using the LLM. But there are other ways of setting up RAG pipelines too :)

    • @jasonrhtx
      @jasonrhtx 4 months ago

      @cloudnativecedric When would it make sense to first use PEFT, then apply RAG? Do both PEFT and RAG assign/label semantic relationships to the texts of user-added corpora and store these in a graph database?

  • @amankushwaha8927
    @amankushwaha8927 23 days ago

    Gone over my head.

  • @harryli7557
    @harryli7557 4 months ago +4

    Large Manguage Model! 2:08

  • @shahraanhussain7465
    @shahraanhussain7465 2 months ago

    How would I get to know whether a model is using RAG or not?

  • @naresha2017
    @naresha2017 1 month ago

    I want to train my deepseekcoder LLM on my source code repo. Should I go for RAG or fine-tune the LLM? For fine-tuning, do I need to provide code snippets for different queries?

  • @Criszusep
    @Criszusep 4 months ago +40

    Euro 2024 World Championship. Nice... of course the LLM couldn't give a response 😂

    • @umakrishnamarineni3520
      @umakrishnamarineni3520 4 months ago +2

      The RAG isn't updated with the new tournament 😂😅

    • @RuiMiguelDaSilvaPinto
      @RuiMiguelDaSilvaPinto 2 months ago

      😂

    • @informatiquereseaux7542
      @informatiquereseaux7542 1 month ago

      Funny, this kind of question would normally confuse even the best LLM 😂😂😂😂😂
      Even with this little mistake, the video was great!

  • @fainted_world
    @fainted_world 4 months ago +1

    Sir, can you tell me how to make the vector store and save it to a specific file so it can be used every time?

  • @thankuchari
    @thankuchari 1 month ago

    Could the use of tools from external sources be classified as RAG? I did some searching and the answer is not conclusive.

  • @ab-fj2iq
    @ab-fj2iq 2 months ago

    Can OpenAI support RAG? All I saw was only FT.

  • @hi5wifi-s567
    @hi5wifi-s567 4 months ago +1

    Using "fine tuning", could a machine (accounting software) be a bookkeeper preparing financial records for …?

    • @cloudnativecedric
      @cloudnativecedric 4 months ago

      Just some ideas off the top of my head for fine-tuning with financial records: preparing financial statements, tax preparation (fine-tuning on region-specific tax rules and historical data), expense tracking & categorization, etc.

  • @atanasmatev9600
    @atanasmatev9600 4 months ago

    Large Language model is "LMM"?

    • @cloudnativecedric
      @cloudnativecedric 4 months ago

      Whoops! Good catch, sometimes I mess up when speaking and writing at the same time, it should be “LLM”.

  • @sbz6782
    @sbz6782 1 month ago +1

    Am I seeing the younger version of the other dude?

  • @memehub2002
    @memehub2002 4 months ago +3

    cool

  • @cungthinh1040
    @cungthinh1040 1 month ago +1

    I feel like I'm reading an explanation from ChatGPT; this video doesn't really explain anything. Or maybe I just suck at learning.

    • @kyleebrahim8061
      @kyleebrahim8061 29 days ago

      Are you writing down and understanding each section? You need to take your time understanding these concepts and then how they work

  • @RajeshKumar-sz6ef
    @RajeshKumar-sz6ef 3 months ago +1

    You did not talk about the cost difference :)

  • @ridwanajibari4443
    @ridwanajibari4443 3 months ago +2

    So the concept of RAG is like attaching a file in GPT and asking questions based on the attached file, isn't it?

    • @aravindradhakrishna8660
      @aravindradhakrishna8660 1 month ago

      I seem to have the same understanding too 😂

    • @ammarparmr
      @ammarparmr 1 month ago

      RAG takes the question from the customer, then finds the most relevant chunks of documents related to it, and passes both, along with the creator's instruction, to an LLM. For example, a customer has a question and an FAQ PDF file is available, so both go into the LLM and the customer gets an accurate answer.

    • @Unineil
      @Unineil 1 month ago

      But use a NAS.

  • @Killputin777
    @Killputin777 4 days ago +1

    It is not lmm

  • @Robert-zc8hr
    @Robert-zc8hr 3 months ago

    Obviously you need both, duh?
    No, seriously, they are not mutually exclusive. Fine-tuning is learning; RAG is gathering requirements for a specific project. An expert needs to do both: he needs to learn in order to specialize, and he needs to be able to gather information for the specific task at hand.

  • @Amelia-o6h3q
    @Amelia-o6h3q 1 month ago

    ty

  • @Datasciencewithsheddy
    @Datasciencewithsheddy 2 months ago +1

    You’ll always have to use a combination of both RAG and FT.

    • @scycer
      @scycer 2 months ago

      Kinda. We will always have to choose the ideal model for the use case (off-the-shelf or fine-tuned) and what context is provided to the model (RAG and other data).
      Really, it's all about context, whether it's ingrained in the model or added as part of the prompt.


  • @anshumanpatel9212
    @anshumanpatel9212 2 months ago

    Did he just write “LMM”, instead of “LMM”?

    • @scotthill4104
      @scotthill4104 2 months ago

      Just the opposite, he wrote "LMM" instead of "LMM".

  • @einjim
    @einjim 4 months ago +2

    So, you are all told to wear your watch on your right hand, right?!

  • @Great_Mage
    @Great_Mage 22 days ago

    LMM

  • @JamilaJibril-e8h
    @JamilaJibril-e8h 4 months ago

    Uhhh okay i see you .....😂😂😂

  • @Zaid-st6wn
    @Zaid-st6wn 3 months ago

    LMM lol

  • @Buxtonphil
    @Buxtonphil 17 days ago

    How on earth can you stress importance when you pronounce the word "impordant"?