Chat with Documents is Now Crazy Fast thanks to Groq API and Streamlit

  • Published: 25 Jul 2024
  • Learn how to build a RAG pipeline with the world's fastest LLM API, the Groq API. We will build a RAG application that lets you chat with a website, and we will wrap everything in a Streamlit app. A minimal pipeline sketch follows the description below.
    🦾 Discord: / discord
    ☕ Buy me a Coffee: ko-fi.com/promptengineering
    🔴 Patreon: / promptengineering
    💼Consulting: calendly.com/engineerprompt/c...
    📧 Business Contact: engineerprompt@gmail.com
    Become a Member: tinyurl.com/y5h28s6h
    Sign up for Advanced RAG:
    tally.so/r/3y9bb0
    LINKS:
    Getting Started with Groq: • Getting Started with G...
    How to chunk: • LangChain: How to Prop...
    Code: github.com/PromtEngineer/Yout...
    TIMESTAMPS:
    [00:00] Introduction
    [00:39] Setting Up: Installing Packages and Importing Libraries
    [01:32] Designing the RAG Pipeline: From Data to Response
    [02:05] Implementing the RAG Pipeline with Groq API
    [06:00] RAG with Streamlit and Groq API
    [10:07] Streamlit App in Action: Real-Time Responses
    All Interesting Videos:
    Everything LangChain: • LangChain
    Everything LLM: • Large Language Models
    Everything Midjourney: • MidJourney Tutorials
    AI Image Generation: • AI Image Generation Tu...
  • Science
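
The code link above is truncated, so here is a minimal sketch of the pipeline the description outlines, assuming the LangChain + Ollama + FAISS stack discussed in the comments; the URL, model names, and prompt are placeholders, and import paths vary slightly across LangChain versions:

    # Minimal RAG-over-a-website sketch: load a page, chunk it, embed the
    # chunks with Ollama, index them in FAISS, and answer with Groq's LLM.
    from langchain_groq import ChatGroq
    from langchain_community.document_loaders import WebBaseLoader
    from langchain_community.embeddings import OllamaEmbeddings
    from langchain_community.vectorstores import FAISS
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain_core.prompts import ChatPromptTemplate
    from langchain.chains.combine_documents import create_stuff_documents_chain
    from langchain.chains import create_retrieval_chain

    docs = WebBaseLoader("https://example.com/article").load()  # placeholder URL
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=1000, chunk_overlap=200
    ).split_documents(docs)

    # Embeddings are computed locally by Ollama; generation goes to Groq.
    vector = FAISS.from_documents(chunks, OllamaEmbeddings(model="llama2"))

    llm = ChatGroq(model_name="mixtral-8x7b-32768")  # reads GROQ_API_KEY from the env
    prompt = ChatPromptTemplate.from_template(
        "Answer the question based only on the context.\n\n"
        "Context:\n{context}\n\nQuestion: {input}"
    )
    chain = create_retrieval_chain(
        vector.as_retriever(),
        create_stuff_documents_chain(llm, prompt),
    )
    print(chain.invoke({"input": "What is this page about?"})["answer"])

Wrapping this in Streamlit is mostly a matter of caching the vector store (st.cache_resource) and rendering the answer with st.write.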

Comments • 45

  • @engineerprompt  1 month ago

    If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag

  • @GroqInc  4 months ago +13

    Excellent demo, thank you for choosing Groq.

  • @2vadlamani  4 months ago

    Really fast :) Thanks for the video

  • @RickySupriyadi  4 months ago +2

    Your content never disappoints, love it!

  • @chyldstudios  4 months ago

    Excellent demo.

  • @engineerprompt  4 months ago +6

    If you are interested in learning more about advanced RAG techniques, sign up here: tally.so/r/3y9bb0

    • @samcavalera9489  4 months ago +1

      I already signed up! Can't wait to start learning advanced RAG techniques 😀

  • @hadi-yeg  4 months ago +6

    I thought you were going to show how crazy fast the RAG system gets set up (which happens at the startup of your Streamlit app)! But you're actually showing the response time from the LLM, which is obviously fast when you call the API.

  • @user-cq7iu4ws6q  4 months ago +1

    Thank you very much! How can I use an external vector store such as AWS OpenSearch or Pinecone instead of the in-memory one? I have a lot of documents to search.

  • @zubinbalsara8414  4 months ago

    Can you please give an example of how to do reranking? Your style of teaching is just absolutely fantastic.

  • @sayyedraza1895  4 months ago +2

    Thanks bro 🙏🙏

  • @limjuroy7078  4 months ago

    So we can speed up the response from the local LLM by using Groq? Also, would creating embeddings for text chunks be faster as well?

  • @stanTrX  2 months ago

    Thanks. How can I do RAG over my own documents, such as PDFs, instead of a website as in your example?

  • @cynthiarohr8560  4 months ago

    Is it possible for you to create a video that also uses Deepgram, so it becomes a conversational AI?

  • @yusufersayyem7242  4 months ago

    Great work, sir 🌟🌟🌟 But I have a question: how can I add more than one link, and also add PDF files?

  • @MikewasG  4 months ago +2

    Thank you very much for your efforts. Your videos have been incredibly helpful to me! I have a question: In my experience, RAG's performance in extracting information from tables or images in PDFs is quite poor. Is there any way to improve this?

    • @engineerprompt  4 months ago +2

      Look into Unstructured.io for correctly parsing tables. LlamaIndex also released a new tool called llama-parse for parsing tables. You might want to explore that as well.
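
      Not the code from the video, but a minimal llama-parse sketch for tables in PDFs, assuming the llama-parse package is installed and LLAMA_CLOUD_API_KEY is set; the file name is a placeholder:

        # Parse a PDF (tables included) into markdown, then feed the text
        # into the usual chunk -> embed -> index pipeline.
        from llama_parse import LlamaParse

        parser = LlamaParse(result_type="markdown")
        documents = parser.load_data("report_with_tables.pdf")  # placeholder file
        print(documents[0].text[:500])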

  • @r0f115L4m  4 months ago +2

    What app did you use to create your flow diagram? Thank you so much for these videos; I learn a lot from them!

  • @felipesanchez5823  4 months ago

    Thanks!

  • @ubaisalih2987  4 months ago +2

    This is really awesome, and it would be great if you could deploy it on Hugging Face or another suggested platform. Eventually we need to deploy the app, not only run it on a local machine.

    • @scitechtalktv9742  4 months ago +1

      I agree with that; I would like to deploy this on Hugging Face Spaces. Is the free version enough for this, or is a paid version necessary?

    • @ubaisalih2987  4 months ago

      @scitechtalktv9742 I think a small LLM can fit on the free version.

  • @uwegenosdude  3 months ago

    Thanks for the very interesting video. I tried to run your example on Windows 11. Unfortunately, I get an error when trying to use FAISS. How do I have to run the FAISS server? My error looks like this:
    Error raised by inference endpoint: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url
    I changed one line to embeddings = OllamaEmbeddings(model="llama2:7b"), and then calling vector = FAISS.from_documents(documents, embeddings) seems to work (it took a couple of minutes!).
    So I think I had no problem with FAISS; instead, the default value for the model, "llama:7b", was not correct.
    But another question: what does mixtral... have to do with llama2? Is it the same?
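
    For anyone hitting the same error: FAISS runs in-process, so there is no separate FAISS server to start; port 11434 belongs to Ollama, which serves the embedding model. And Mixtral and llama2 are different models: in this setup llama2 (via Ollama) only produces the embeddings, while Mixtral (via Groq) generates the answers. A minimal sketch of the fix described above, assuming the model has been pulled with Ollama:

      from langchain_community.embeddings import OllamaEmbeddings
      from langchain_community.vectorstores import FAISS

      # Port 11434 is Ollama's, not FAISS's: make sure Ollama is running
      # and the model below has been pulled (e.g. `ollama pull llama2:7b`).
      embeddings = OllamaEmbeddings(model="llama2:7b")
      vector = FAISS.from_documents(documents, embeddings)  # `documents` = your chunks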

  • @stanTrX  2 months ago

    Thanks. Why do you use the Ollama llama2 embedding model instead of something else like nomic-embed?

    • @engineerprompt  2 months ago +1

      There are way too many options. This is just to show what is possible :)

  • @ABHINAVKUMAR-tu4ry  1 month ago

    I want to build and deploy this type of application, but for this I have to run Ollama in the background. Is there any other way? Can anyone help me?

  • @scitechtalktv9742  4 months ago +1

    I would like to store / serialize the vector store / embeddings because on my PC it takes a very long time to generate those! I mean extremely long: more than 4 hours! How can I do that?

    • @engineerprompt  4 months ago +1

      You can use an external API for that. Hugging Face offers free embedding APIs.
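
      Not shown in the video, but a LangChain FAISS index can also be saved to disk and reloaded, so the embeddings are computed only once; a minimal sketch, where the folder name is arbitrary:

        # Build the index once, save it, and reload it on later runs.
        vector.save_local("faiss_index")

        vector = FAISS.load_local(
            "faiss_index",
            embeddings,
            allow_dangerous_deserialization=True,  # needed on newer LangChain; drop on older versions
        )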

  • @kate-pt2ny  4 months ago +1

    1. After running the program, it required installing llama2. There are many other Ollama models on my computer, but it seems the program uses llama2 by default.
    2. The initial run takes a long time (the URL in the example takes about 3 minutes on an M1 Max with 32 GB). Once the vectors are built, search is very fast.
    3. With news links, pages can be quickly parsed and searched to obtain answers; RAG works well there.
    4. The RAG results on arXiv HTML papers are poor.
    Thank you for sharing!

    • @engineerprompt  4 months ago

      You can use any other embedding model with it. This example was using the llama2 embeddings from Ollama.
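
      For example, swapping in a different Ollama embedding model is a one-line change, assuming the model has already been pulled locally:

        from langchain_community.embeddings import OllamaEmbeddings

        # e.g. run `ollama pull nomic-embed-text` first
        embeddings = OllamaEmbeddings(model="nomic-embed-text")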

  • @fabriziocasula  4 months ago +1

    Is it possible to read PDFs?

    • @engineerprompt  4 months ago +1

      Yes, take a look at my LangChain tutorials.
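
      As a rough sketch of the idea (not the exact code from those tutorials): LangChain's PyPDFLoader swaps in for the web loader and the rest of the pipeline stays the same; requires pypdf, and the file name is a placeholder:

        from langchain_community.document_loaders import PyPDFLoader

        docs = PyPDFLoader("my_document.pdf").load()  # one Document per page
        # then split, embed, and index exactly as with the website example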

  • @BrandonLee-ik8kw  4 months ago +1

    The notebook doesn't work. I get a ValueError: Error raised by inference endpoint: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))

    • @engineerprompt  4 months ago

      Do you have Ollama running?
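
      A quick way to check, assuming Ollama's default endpoint:

        import requests

        # Ollama listens on localhost:11434 by default; a refused connection
        # means it isn't running (start the Ollama app or `ollama serve`).
        try:
            requests.get("http://localhost:11434", timeout=2)
            print("Ollama is up")
        except requests.exceptions.ConnectionError:
            print("Ollama is not running")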

    • @qurious474  1 month ago

      @engineerprompt What is this?
