Step-by-Step Guide to Building a RAG LLM App with Llama 2 and LlamaIndex

  • Published: 20 May 2024
  • In this video we will be creating an advanced RAG LLM app with Meta Llama 2 and LlamaIndex. We will be using the Hugging Face API to access the Llama 2 model.
    github: github.com/krishnaik06/Llamin...
    ----------------------------------------------------------------------------------------------
    Support me by joining the membership so that I can upload these kinds of videos
    / @krishnaik06
    ----------------------------------------------------------------------------
    ►Data Science Projects:
    • Now you Can Crack Any ...
    ►Learn In One Tutorials
    Statistics in 6 hours: • Complete Statistics Fo...
    Machine Learning In 6 Hours: • Complete Machine Learn...
    Deep Learning 5 hours : • Deep Learning Indepth ...
    ►Learn In a Week Playlist
    Statistics: • Live Day 1- Introducti...
    Machine Learning : • Announcing 7 Days Live...
    Deep Learning: • 5 Days Live Deep Learn...
    NLP : • Announcing NLP Live co...
    ---------------------------------------------------------------------------------------------------
    My Recording Gear
    Laptop: amzn.to/4886inY
    Office Desk : amzn.to/48nAWcO
    Camera: amzn.to/3vcEIHS
    Writing Pad: amzn.to/3vcEIHS
    Monitor: amzn.to/3vcEIHS
    Audio Accessories: amzn.to/48nbgxD
    Audio Mic: amzn.to/48nbgxD

Comments • 109

  • @aravindsai2843
    @aravindsai2843 3 months ago +1

    Much awaited series, thank you Krish sir ♥

  • @Pasha_Vamkon
    @Pasha_Vamkon 1 month ago

    Thank you so much for this video!!! Very helpful!!
    I've managed to get a bit of understanding of LLM and to do my lab task!!!

  • @chanishagarwal9103
    @chanishagarwal9103 2 months ago

    Thank you Krish for all your hard work. Keep making such amazing videos.

  • @GerardoBarcia
    @GerardoBarcia 3 months ago +20

    It would be amazing if you show us how to put all of this into production through an API! Thanks for your wonderful work! You Rock!!

    • @timothylenaerts1123
      @timothylenaerts1123 3 months ago +1

      vLLM is easy enough to use; they provide a Docker image. Run that bad boy with whatever model you want and use their OpenAI endpoint, then you can just use that in LangChain or LlamaIndex.
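A rough sketch of what the reply above describes: once vLLM is serving a model, its OpenAI-compatible chat endpoint can be called with nothing but the Python standard library. The URL, model name, and helper function below are illustrative assumptions; actually sending the request requires a running `vllm serve` (or Docker) instance.

```python
import json
from urllib import request

# Assumed local endpoint; vLLM exposes an OpenAI-compatible API by default.
VLLM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "meta-llama/Llama-2-7b-chat-hf"):
    """Build an OpenAI-style chat-completion request for a vLLM server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return request.Request(
        VLLM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With a server running, the response parses like any OpenAI-style reply:
# resp = json.loads(request.urlopen(build_chat_request("What is RAG?")).read())
# print(resp["choices"][0]["message"]["content"])
```

Because the endpoint speaks the OpenAI wire format, the same server also plugs into LangChain or LlamaIndex through their OpenAI-compatible LLM wrappers.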

  • @lixiasong3459
    @lixiasong3459 3 months ago

    Thank you, Sir, you are amazing!

  • @atifsaeedkhan9207
    @atifsaeedkhan9207 2 months ago

    U r really a good instructor. ❤

  • @rubyrana7786
    @rubyrana7786 2 months ago

    Indeed a great video. Please try to include the reason for using different approaches: in earlier videos the model was loaded one way, and here another. A simple explanation of the reason behind a specific approach can be very useful for beginners, since the approach changes as we move on to more complex applications and different use cases.

  • @smartwork7098
    @smartwork7098 1 month ago +1

    Thanks man, it works well. (After correcting some changes made in llama and huggingface)

    • @ahmadmasood3939
      @ahmadmasood3939 3 days ago

      I am facing a problem while importing HuggingFaceLLM.
      Can you tell me what you did?

  • @akshatapalsule2940
    @akshatapalsule2940 3 months ago

    Thank you so much Krish, it was worth the wait :)

  • @MLAlgoTrader
    @MLAlgoTrader 5 days ago

    You are amazing.

  • @nunoalexandre6408
    @nunoalexandre6408 3 months ago

    Love it

  • @amritsubramanian8384
    @amritsubramanian8384 14 days ago

    gr8 video

  • @ivanrowland142
    @ivanrowland142 3 months ago

    Love this. When is the next instalment?

  • @user-fp3tm1jp2f
    @user-fp3tm1jp2f 2 months ago +1

    Lets Goooooo

  • @y6bt2501
    @y6bt2501 3 months ago +7

    Please make a video on RAG with a CSV or database, with a local open-source LLM and with memory

  • @charmilagiri4602
    @charmilagiri4602 3 months ago +1

    Sir, instead of using the Llama 2 model from Hugging Face, can we try the quantized Llama model? If we use the quantized model, will the output accuracy vary?

  • @RanjitSingh-rq1qx
    @RanjitSingh-rq1qx 3 months ago

    Sir, everything was fine, but you missed just one thing in this project: you built a prompt but then did not use it and went with the default prompt instead. Why? The remaining part was so good, with a good explanation ❤️

    • @krishnaik06
      @krishnaik06 3 months ago +1

      I will come up with more examples... this is a basic-to-intermediate RAG system

  • @hassubalti7814
    @hassubalti7814 3 months ago

    Sir, great method of teaching us, as well as helping us gain a good grip on English. Please make a video about the tokens used in the Llama 2 model

  • @stevefisher35
    @stevefisher35 3 months ago +1

    Thanks for the detailed run-through, very useful. One question I have is on the two PDF documents you used. Are these available anywhere, just for testing purposes?

    • @sravan160
      @sravan160 2 months ago

      I have some doubts in implementing the code. Can you help?

    • @eswararya196
      @eswararya196 19 days ago

      Yeah ask me?

  • @dillikaextrovert
    @dillikaextrovert 3 months ago

    Hello Krish
    The list of accessories you have mentioned does not have the right Amazon links. Can you please give me the link for the writing pad which you use?

  • @miteshgarg9420
    @miteshgarg9420 3 months ago

    Hey Krish, amazing video again. Can you please help create a similar solution for custom text-to-SQL?

  • @MK5491
    @MK5491 3 months ago

    @krishnaik06 Sir, thank you for this knowledgeable video. My question is: which evaluation model should we use to measure accuracy in terms of answer and context retrieval?
    If possible, will you please create one video on evaluation methods for RAG applications?

    • @akashchavan3353
      @akashchavan3353 3 months ago

      @krishnaik06 sir, I also want this; can you please create one video on the evaluation method. Thanks

    • @anant9421
      @anant9421 3 months ago

      @krishnaik06 yes, please create a video on an evaluation method for RAG applications

  • @DoomsdayDatabase
    @DoomsdayDatabase 1 month ago +2

    Hi Krish sir! They have updated service_context to Settings.llm and I am not able to understand how to implement it in this code.
    Please help!
    Thanks!
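For readers hitting the same migration, here is a minimal sketch of the newer configuration style, assuming llama-index >= 0.10 where the global `Settings` object replaced `ServiceContext`. The function and argument names are illustrative, and the import is guarded so the sketch can be read even without the library installed:

```python
# Assumption: llama-index >= 0.10, where Settings replaced ServiceContext.
try:
    from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
    HAVE_LLAMA_INDEX = True
except ImportError:
    HAVE_LLAMA_INDEX = False

def build_index(data_dir, llm, embed_model):
    """Old: ServiceContext.from_defaults(llm=..., embed_model=...);
    new: assign the same components to the global Settings object."""
    Settings.llm = llm
    Settings.embed_model = embed_model
    docs = SimpleDirectoryReader(data_dir).load_data()
    # from_documents no longer takes a service_context argument;
    # the index picks up the LLM and embed model from Settings.
    return VectorStoreIndex.from_documents(docs)
```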

  • @atifsaeedkhan9207
    @atifsaeedkhan9207 2 months ago +1

    Is it possible to use local LLMs directly instead of Hugging Face? I have Ollama and LM Studio installed.

  • @yashghugardare519
    @yashghugardare519 3 months ago

    Sir, instead of RAG with PDFs, make a video on RAG with videos, which will process videos and be able to answer questions based on the video

  • @GaneshEswar
    @GaneshEswar 2 months ago

    waiting for next video, please upload it ...

  • @user-hj9ck9rp4c
    @user-hj9ck9rp4c 3 months ago +7

    I am getting an error importing VectorStoreIndex from llama_index

    • @ByYouTube2
      @ByYouTube2 2 months ago

      use: from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, ServiceContext

  • @tarekferradji
    @tarekferradji 1 month ago +1

    Can I combine LoRA fine-tuning and RAG for this LLM? Can this combination give me very interesting performance?

  • @vinayaktiwari4463
    @vinayaktiwari4463 3 months ago

    Hi Krish, which vector store have you utilised here? There was no mention of it in the code

    • @VivekGuptaMusic
      @VivekGuptaMusic 3 months ago

      He is not saving the embeddings in any vector store; they are used directly

  • @chinnibngrm272
    @chinnibngrm272 3 months ago

    Sir, as I am a student, I don't have a GPU in my machine, so I am not able to do projects with these open-source LLMs, nor with OpenAI. Can you please help us solve the resource errors by using other models?

  • @rubyrana7786
    @rubyrana7786 2 months ago

    Why did we use a separate embedding model here, while in the earlier video of this playlist we directly used VectorStoreIndex on the documents? Why did we follow different approaches while creating similar applications? Is it because of the different model, or is it just a different approach that can be done either way?

    • @nikhilanand9022
      @nikhilanand9022 27 days ago

      I had one doubt: does VectorStoreIndex use an embedding model behind the scenes for creating the index, or how does it create the embeddings?

  • @yuvrajthakur5728
    @yuvrajthakur5728 3 months ago

    Hey Krish, why did you leave the PWskills masters in data science course? I joined the course because of you, but now I am seeing new tutors there.

  • @chinnibngrm272
    @chinnibngrm272 3 months ago

    I am getting RuntimeError: CUDA error while running
    index = VectorStoreIndex.from_documents(docs, service_context=service_context)
    Sir, please provide a solution to run on CPU.

  • @Narutome30
    @Narutome30 3 months ago

    Why is he using Google Colab rather than VS Code?
    And also, please answer this question: can we use VS Code to run the Seamless M4T Meta model?

  • @vivekshindeVivekShinde
    @vivekshindeVivekShinde 3 months ago +1

    I have lots of PDF documents and want to create a custom chatbot based on them. Which one will be better: LangChain or LlamaIndex?

    • @RanjitSingh-rq1qx
      @RanjitSingh-rq1qx 3 months ago

      LlamaIndex for indexing, LangChain for answering the query with a prompt via the LangChain LLM, and Gemini Pro as the LLM model. That would be a great combination of all these technologies ❤

    • @vivekshindeVivekShinde
      @vivekshindeVivekShinde 3 months ago

      @@RanjitSingh-rq1qx Thanks for the suggestions. I am looking for open source. So while indexing, LlamaIndex doesn't use the OpenAI API or anything, right?

    • @RanjitSingh-rq1qx
      @RanjitSingh-rq1qx 3 months ago

      @@vivekshindeVivekShinde yes all are open source

    • @chinnibngrm272
      @chinnibngrm272 3 months ago

      Guys, can you please share the implementation of this mixing LlamaIndex, LangChain, and Gemini Pro?
      Please... it will be very helpful 😊😊

    • @nikhilanand9022
      @nikhilanand9022 27 days ago

      @@vivekshindeVivekShinde I think when we use VectorStoreIndex it uses the OpenAI embedding model API for creating the index; can you please confirm once?

  • @thatsgame7480
    @thatsgame7480 1 month ago

    Where do we get the data from, like you have done in this case?

  • @fadhilayosof5927
    @fadhilayosof5927 3 months ago

    Can you make a video on creating flowcharts with an LLM?

  • @shehrozkhan9563
    @shehrozkhan9563 1 month ago

    Can we add conversation history to this app?

  • @MatkoZaja
    @MatkoZaja 1 month ago

    Is there a way to ensure that once PDFs are processed, they do not need to be reprocessed every time the script runs, but rather that a cached database can be stored? Does anyone have code for this?
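One common answer to the caching question above, sketched under the assumption of llama-index's persist/load API (`StorageContext` plus `load_index_from_storage`); the helper name and storage directory are illustrative, and the import is guarded so the sketch stands alone:

```python
import os

# Assumption: llama-index >= 0.10 package layout.
try:
    from llama_index.core import (
        SimpleDirectoryReader,
        StorageContext,
        VectorStoreIndex,
        load_index_from_storage,
    )
    HAVE_LLAMA_INDEX = True
except ImportError:
    HAVE_LLAMA_INDEX = False

def load_or_build_index(data_dir: str, persist_dir: str = "./storage"):
    """Reuse a previously persisted index; parse and embed the PDFs only once."""
    if os.path.isdir(persist_dir):
        # PDFs already processed: load embeddings from disk, skip re-parsing.
        ctx = StorageContext.from_defaults(persist_dir=persist_dir)
        return load_index_from_storage(ctx)
    docs = SimpleDirectoryReader(data_dir).load_data()
    index = VectorStoreIndex.from_documents(docs)
    index.storage_context.persist(persist_dir=persist_dir)  # cache for next run
    return index
```

On the first run this builds and persists the index; on every later run the script loads the cached store instead of reprocessing the PDFs.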

  • @Arkantosi
    @Arkantosi 1 month ago +2

    Old tutorial. Most of the imports no longer work due to deprecations.

  • @Playstore-zc5xk
    @Playstore-zc5xk 3 months ago

    How to convert this into end to end?

  • @himanshudeswal9895
    @himanshudeswal9895 2 months ago

    Hi, can anyone tell me how to download these raw PDFs for hands-on practice, please?

  • @achukisaini2797
    @achukisaini2797 3 months ago +1

    How to reduce hallucination? If the answer is not in the context, then it is hallucinating.

    • @soulfuljourney22
      @soulfuljourney22 3 months ago

      Maybe you can modify the prompt to handle the not-in-context situation

    • @IamMarcusTurner
      @IamMarcusTurner 3 months ago

      Literally prompt the LLM: tell it that if the answer is not in the document, it should say it does not know.

    • @riteshsingh811
      @riteshsingh811 2 months ago

      If the content is present and it is still hallucinating, there are certain advanced RAG techniques, like Sentence Window Retrieval and Auto-Merging Retrieval, that can help improve the context. Just try reading about them and implement them. Also, tuning the agent to not give an answer when it doesn't know helps in the unknown-answer scenario.
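A minimal illustration of the prompting approach suggested in this thread: tell the model, in the system prompt, to refuse when the retrieved context lacks the answer. The wording and helper function below are an example, not a tuned or official prompt:

```python
# Illustrative refusal instruction; adjust the wording for your own app.
GROUNDED_SYSTEM_PROMPT = (
    "Answer ONLY from the provided context. "
    "If the context does not contain the answer, reply exactly: "
    "\"I don't know based on the provided documents.\""
)

def make_prompt(context: str, question: str) -> str:
    """Assemble a grounded QA prompt from retrieved context and a user question."""
    return (
        f"{GROUNDED_SYSTEM_PROMPT}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

This only reduces, not eliminates, hallucination; the retrieval-side techniques mentioned above attack the same problem from the context-quality side.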

  • @abhinandansharma3983
    @abhinandansharma3983 2 months ago

    Where will I get this dataset?

  • @user-bb5nz6cf5j
    @user-bb5nz6cf5j 2 months ago

    Sir, the llama-index library is changing every day, and there are many import errors in the code. Can you tell me a suitable version of llama-index to run the code?

    • @ishratsyed2857
      @ishratsyed2857 2 months ago

      I was having the same issue, I tried installing version 0.9.40 and it's working now

    • @sebastienmaillet9371
      @sebastienmaillet9371 2 months ago

      @@ishratsyed2857 I tried to install llama_index version 0.9.40 but I got the following message:
      ImportError Traceback (most recent call last)
      ----> 1 from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
      2 from llama_index.llms import HuggingFaceLLM
      3 from llama_index.prompts.prompts import SimpleInputPrompt
      ImportError: cannot import name 'VectorStoreIndex' from 'llama_index' (unknown location)

    • @sebastienmaillet9371
      @sebastienmaillet9371 2 months ago

      Do you know what I might be missing?

    • @santhoshmanoharan8969
      @santhoshmanoharan8969 2 months ago

      @@sebastienmaillet9371 I tried the same code in my local Anaconda environment and I'm getting errors importing the packages, but it works fine when I use Google Colab. Can anyone explain why?

    • @ShreyasR-vr1es
      @ShreyasR-vr1es 1 month ago

      Try using the import like this instead:
      from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
      from llama_index.llms.huggingface import HuggingFaceLLM
      from llama_index.core.prompts.prompts import SimpleInputPrompt
      This should work for you!

  • @goldy5553
    @goldy5553 3 months ago +1

    The library is pretty messed up; nothing is working. Everywhere there is a module import error, and functions are missing or deprecated. If you found this, don't worry guys, we are on the same page. Sir, could you please check whether there is some issue, or what they have done to the library?

    • @eswararya196
      @eswararya196 19 days ago

      If you are having a module import error then use
      llama_index.core
      instead of
      llama_index

  • @harshab2743
    @harshab2743 1 month ago

    I am getting an error while importing VectorStoreIndex from llama_index, saying that llama_index doesn't exist. Can someone help?

    • @ShreyasR-vr1es
      @ShreyasR-vr1es 1 month ago +1

      Try using the import like this instead:
      from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
      from llama_index.llms.huggingface import HuggingFaceLLM
      from llama_index.core.prompts.prompts import SimpleInputPrompt
      This should work for you!

  • @shreyavalte3077
    @shreyavalte3077 3 months ago +9

    I have a very big question

  • @sumanmaity3162
    @sumanmaity3162 1 month ago

    Hello Krish, I'm getting a basic error as below. Can you please help?
    ImportError Traceback (most recent call last)
    ----> 1 from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
    2 from llama_index.llms import HuggingFaceLLM
    3 from llama_index.prompts.prompts import SimpleInputPrompt
    ImportError: cannot import name 'VectorStoreIndex' from 'llama_index' (unknown location)

    • @ShreyasR-vr1es
      @ShreyasR-vr1es 1 month ago +1

      Try using the import like this instead:
      from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
      from llama_index.llms.huggingface import HuggingFaceLLM
      from llama_index.core.prompts.prompts import SimpleInputPrompt
      This should work for you!

    • @sumanmaity3162
      @sumanmaity3162 1 month ago

      @@ShreyasR-vr1es Thank you, the other 2 worked but I'm getting an error with the one below:
      ModuleNotFoundError Traceback (most recent call last)
      ----> 1 from llama_index.llms.huggingface import HuggingFaceLLM
      ModuleNotFoundError: No module named 'llama_index.llms.huggingface'

    • @sumanmaity3162
      @sumanmaity3162 1 month ago

      Please ignore, it worked; I had some installation issues. Thank you so much.

    • @darshitshah8668
      @darshitshah8668 6 days ago

      @@sumanmaity3162 I am facing the same error; how did it get resolved for you?

    • @sumanmaity3162
      @sumanmaity3162 5 days ago

      @@darshitshah8668 Please reinstall, it should work

  • @KumR
    @KumR 3 months ago

    Hey Krish. The video is cool. But can you tell us how we will know what different things we need to import? You may have done a lot of research; kindly point us to the source of truth.

  • @user-gk7ox3of4b
    @user-gk7ox3of4b 2 months ago

    i need those pdfs

  • @khyathinkadam5524
    @khyathinkadam5524 1 month ago

    While running:
    import torch
    llm = HuggingFaceLLM(
        context_window=4096,
        max_new_tokens=256,
        generate_kwargs={"temperature": 0.0, "do_sample": False},
        system_prompt=system_prompt,
        query_wrapper_prompt=query_wrapper_prompt,
        tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
        model_name="meta-llama/Llama-2-7b-chat-hf",
        device_map="auto",
        # uncomment this if using CUDA to reduce memory usage
        model_kwargs={"torch_dtype": torch.float16, "load_in_8bit": True}
    )
    in Colab, I'm getting an import error stating that I need to install accelerate, but I already have it in my env

  • @thomasmuller1521
    @thomasmuller1521 2 months ago +3

    The libraries changed:
    from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
    from llama_index.llms.huggingface import HuggingFaceLLM
    from llama_index.core.prompts.prompts import SimpleInputPrompt
    from langchain.embeddings.huggingface import HuggingFaceEmbeddings
    from langchain.embeddings import HuggingFaceEmbeddings
    from llama_index.embeddings.langchain import LangchainEmbedding
    import llama_index

    • @AmitojSingh-tf9ex
      @AmitojSingh-tf9ex 1 month ago

      Bro, this line is giving an error:
      from llama_index.embeddings.langchain import LangchainEmbedding
      How do I find the correct one?

    • @AmitojSingh-tf9ex
      @AmitojSingh-tf9ex 1 month ago

      How to make this run:
      from langchain.embeddings.huggingface import HuggingFaceEmbeddings
      from langchain.embeddings import HuggingFaceEmbeddings
      from llama_index.embeddings import LangchainEmbedding
      embed_model = LangchainEmbedding(
          HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2"))

    • @khyathinkadam5524
      @khyathinkadam5524 1 month ago

      While running:
      import torch
      llm = HuggingFaceLLM(
          context_window=4096,
          max_new_tokens=256,
          generate_kwargs={"temperature": 0.0, "do_sample": False},
          system_prompt=system_prompt,
          query_wrapper_prompt=query_wrapper_prompt,
          tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
          model_name="meta-llama/Llama-2-7b-chat-hf",
          device_map="auto",
          # uncomment this if using CUDA to reduce memory usage
          model_kwargs={"torch_dtype": torch.float16, "load_in_8bit": True}
      )
      in Colab, I'm getting an import error stating that I need to install accelerate, but I already have it in my env

    • @nikhilanand9022
      @nikhilanand9022 27 days ago

      I had one doubt: does VectorStoreIndex use an embedding model behind the scenes for creating the index, or how does it create the embeddings?

  • @AnandYadav-gv1xw
    @AnandYadav-gv1xw 1 month ago

    from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
    ImportError: cannot import name 'VectorStoreIndex' from 'llama_index' (unknown location)

    • @sohammhatre389
      @sohammhatre389 1 month ago

      SimpleDirectoryReader too

    • @ShreyasR-vr1es
      @ShreyasR-vr1es 1 month ago

      Try using the import like this instead:
      from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
      from llama_index.llms.huggingface import HuggingFaceLLM
      from llama_index.core.prompts.prompts import SimpleInputPrompt
      This should work for you!

  • @arnavdeshmukh2820
    @arnavdeshmukh2820 1 month ago

    Facing an issue with index = VectorStoreIndex.from_documents(documents, service_context=service_context). Can anyone help?

  • @sayanghosh6996
    @sayanghosh6996 1 month ago

    06:00
    !pip install -q llama-index llama-index-llms-huggingface
    from llama_index.core import VectorStoreIndex,SimpleDirectoryReader,ServiceContext
    from llama_index.llms.huggingface import HuggingFaceLLM
    from llama_index.core.prompts.prompts import SimpleInputPrompt