Hugging Face LLMs with SageMaker + RAG with Pinecone

  • Published: Sep 7, 2024

Comments • 31

  • @jamesbriggs
    @jamesbriggs  1 year ago +2

    👋🏼 Check out the article version of the video here:
    www.pinecone.io/learn/sagemaker-rag/

  • @mr.daniish
    @mr.daniish 1 year ago +4

    James can teach a 9-year-old what RAG is!

  • @noneofyourbusiness8625
    @noneofyourbusiness8625 1 year ago +1

    This channel provides so much valuable information for free and I really appreciate it!

  • @Yikina7
    @Yikina7 6 months ago

    Amazing video, thank you very much! It's obvious a lot of work went into making it in such a well-structured way. Very easy to follow, you know how to teach :)

  • @shashwatkumar5556
    @shashwatkumar5556 11 months ago

    I want to thank you for this walkthrough. This was very informative. And I know it must have taken quite a lot of time and effort to make it. So thank you!!

  • @RezaA
    @RezaA 1 year ago +1

    Thank you for the well-described demo. The recommended vector DB for this stack is probably OpenSearch, which does the same as Pinecone, but you have more control and you own it, and it's more expensive.

    • @jamesbriggs
      @jamesbriggs  1 year ago +1

      meh, opensearch doesn't scale beyond 1M vecs well and their vec search implementation is nothing special - if you want open source I'd recommend qdrant (also rust like Pinecone) or weaviate

    • @arikupe2
      @arikupe2 1 year ago

      @jamesbriggs Thanks for the video James! I was wondering what issues you've experienced with scaling OpenSearch? We're considering it for our large-scale business use case and had thought it would be a good fit for larger-scale use

  • @sandeeprawat4981
    @sandeeprawat4981 11 months ago

    Thank you so much... really appreciate it... love from India

  • @SolidBuildersInc
    @SolidBuildersInc 3 months ago

    Thank you for your presentation. I clicked the Subscribe button, although I didn't delve into the video content. During your talk, I recall you mentioning the open-source LLM and discussing AWS pricing. This led me to prioritize a cost-effective solution that allows for scalability. Have you considered running an ollama model locally and setting up a tunnel with a port endpoint for a public URL? I appreciate any feedback you can provide. 😊
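
    For anyone curious, a minimal sketch of that setup, assuming ollama's default REST API on port 11434 and a tunnel such as ngrok ("ngrok http 11434"); the public hostname below is hypothetical:

    import requests

    # localhost while testing; swap in the tunnel's public URL to reach the
    # model remotely -- this hostname is a placeholder, not a real endpoint
    OLLAMA_URL = "https://example-tunnel.ngrok.app"

    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": "llama2", "prompt": "What is RAG?", "stream": False},
    )
    print(resp.json()["response"])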

  • @e_hossam96
    @e_hossam96 9 months ago

    Thank you for your great effort 🤗

  • @energyexecs
    @energyexecs 6 months ago

    James - Great video, and I like how you referred back to your flow chart diagram. My task: I am working on a "corpus" of publicly available engineering technical standards documents that are only available as PDF or Word documents. I want to encode the words (tokens) in those documents into a vector database, take them through an LLM (Bing GPT) transformer architecture, and then use RAG to focus only on the tokens (words) for that corpus of engineering standards. Why? Because right now I do a "Ctrl+F search", which takes forever with my clients, to find the standards, which include both words and diagrams/pictures (different modalities) -- so instead of spending hours on Ctrl+F, my plan is to convert those documents into the vector database and enable a "generative search" in natural language instead of a Ctrl+F search. Does this make sense? Your video is giving me the pathway to success.
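
    That pipeline is essentially what the video builds. A minimal sketch of the encode-and-search half, assuming a MiniLM embedder via sentence-transformers and a pre-created 384-dimension Pinecone index (the index name, chunks, and query are hypothetical):

    from pinecone import Pinecone
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings
    pc = Pinecone(api_key="YOUR_API_KEY")
    index = pc.Index("standards")  # hypothetical index name

    chunks = ["...text chunk from a standards document...", "...another chunk..."]
    vectors = embedder.encode(chunks).tolist()
    index.upsert(vectors=[
        (f"chunk-{i}", vec, {"text": chunk})
        for i, (vec, chunk) in enumerate(zip(vectors, chunks))
    ])

    # natural-language query instead of a Ctrl+F search
    query_vec = embedder.encode("minimum pipe wall thickness").tolist()
    matches = index.query(vector=query_vec, top_k=5, include_metadata=True)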

  • @megamehdi89
    @megamehdi89 1 year ago

    Awesome content, thank you so much. Very good explanation. I love watching your videos. I try to follow them and learn 😊

  • @VenkatesanVenkat-fd4hg
    @VenkatesanVenkat-fd4hg 1 year ago

    Thanks for your valuable videos, as always. Can you discuss fine-tuning Llama 2 7B or 13B on a dataset and deploying it in SageMaker?

  • @shalabhgarg8225
    @shalabhgarg8225 1 year ago

    Well just too good

  • @VaibhavPatil-rx7pc
    @VaibhavPatil-rx7pc 1 year ago

    Excellent

  • @barkingchicken
    @barkingchicken 1 year ago

    Great video

  • @user-yu4kt5ie4r
    @user-yu4kt5ie4r 1 year ago

    Will you be doing a video on deployment? Great video btw.

  • @serkansandkcoglu3048
    @serkansandkcoglu3048 10 months ago

    Thank you! This is very informative! When we put our embeddings into the Pinecone vector DB, is our data going outside? I would be OK pushing my sensitive data to an AWS S3 bucket, but where does that Pinecone DB reside?

  • @sergioquintero4624
    @sergioquintero4624 10 months ago

    @jamesbriggs Hi James, thank you for the amazing video. I have a question: is it possible to deploy both models (embedding and LLM) on the same endpoint? Just to save money, considering that in RAG pipelines the embedding step and the retrieval are sequential steps.

  • @AaronChan-x2d
    @AaronChan-x2d 1 month ago

    You need to define your llm in step 2, "asking the model directly":

    from sagemaker.huggingface import HuggingFacePredictor

    llm = HuggingFacePredictor(
        endpoint_name="flan-t5-demo"  # use the name of your deployed endpoint
    )
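
    A quick way to sanity-check the endpoint afterwards, assuming it serves a text2text-generation model (the payload format follows the Hugging Face inference toolkit):

    out = llm.predict({"inputs": "Which instances can I use with SageMaker?"})
    print(out[0]["generated_text"])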

  • @riyaz8072
    @riyaz8072 9 months ago

    How do I create a vector database for PDF documents?
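
    One common approach, sketched with the pypdf library (the file name and chunk size are placeholders): extract the text, chunk it, then embed and upsert the chunks as in the Pinecone sketch further up.

    from pypdf import PdfReader

    reader = PdfReader("standard.pdf")  # placeholder file name
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # naive fixed-size chunking; production pipelines usually overlap chunks
    chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]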

  • @pantherg4236
    @pantherg4236 1 year ago

    What is the best way to learn deep learning fundamentals via implementation (let's say by picking a trivial problem like building a movie recommendation system) using PyTorch, as of Aug 26, 2023?

  • @brianrowe1152
    @brianrowe1152 1 year ago

    Neat, but why? Is SageMaker just LangChain hosted at AWS?

    • @jamesbriggs
      @jamesbriggs  1 year ago +1

      no it's more like Colab + ML infra, you can also use langchain with sagemaker - the why is for the infra component, hosting open source LLMs is super easy
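
      To illustrate that infra component: a minimal sketch of deploying an open-source model from the Hugging Face Hub to a SageMaker endpoint (the instance type and framework version trio are assumptions; match them to your sagemaker SDK):

      import sagemaker
      from sagemaker.huggingface import HuggingFaceModel

      role = sagemaker.get_execution_role()  # your SageMaker execution role

      model = HuggingFaceModel(
          env={
              "HF_MODEL_ID": "google/flan-t5-xl",
              "HF_TASK": "text2text-generation",
          },
          role=role,
          transformers_version="4.26",  # assumed versions
          pytorch_version="1.13",
          py_version="py39",
      )

      predictor = model.deploy(
          initial_instance_count=1,
          instance_type="ml.g5.2xlarge",  # assumed instance type
          endpoint_name="flan-t5-demo",
      )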

  • @rociotesla
    @rociotesla 3 months ago

    Your code doesn't run for shit, bro.

  • @sndrstpnv8419
    @sndrstpnv8419 5 months ago

    You use the wrong model in the article: 'HF_MODEL_ID':'meta-llama/Llama-2-7b', but it's supposed to be MiniLM.
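
    For context, the embedding model in this stack points at MiniLM rather than Llama 2, roughly like this (a sketch; check the article for the exact IDs it uses):

    hub = {
        "HF_MODEL_ID": "sentence-transformers/all-MiniLM-L6-v2",  # embedding model
        "HF_TASK": "feature-extraction",
    }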