RAG + Langchain Python Project: Easy AI/Chat For Your Docs

  • Published: 22 May 2024
  • Learn how to build a "retrieval augmented generation" (RAG) app with Langchain and OpenAI in Python.
    You can use this to create chat-bots for your documents, books or files. You can also use it to build rich, interactive AI applications that use your data as a source.
    👉 Links
    🔗 Code: github.com/pixegami/langchain...
    📄 (Sample Data) AWS Docs: github.com/awsdocs/aws-lambda...
    📄 (Sample Data) Alice in Wonderland: www.gutenberg.org/ebooks/11
    📚 Chapters
    00:00 What is RAG?
    01:36 Preparing the Data
    05:05 Creating Chroma Database
    06:36 What are Vector Embeddings?
    09:38 Querying for Relevant Data
    12:47 Crafting a Great Response
    16:18 Wrapping Up
    #pixegami #python

Comments • 275

  • @colegoddin9034
    @colegoddin9034 4 months ago +35

    Easily one of the best explained walk-throughs of LangChain RAG I’ve watched. Keep up the great content!

    • @pixegami
      @pixegami 3 months ago

      Thanks! Glad you enjoyed it :)

  • @elijahparis3719
    @elijahparis3719 5 months ago +18

    I never comment on videos, but this was such an in-depth and easy to understand walkthrough! Keep it up!

    • @pixegami
      @pixegami 5 months ago

      Thank you :) I appreciate you commenting, and I'm glad you enjoyed it. Please go build something cool!

  • @gustavojuantorena
    @gustavojuantorena 6 months ago +6

    Your channel is one of the best on YouTube. Thank you. Now I'll go watch the video.

  • @insan2080
    @insan2080 14 days ago +1

    This is what I was looking for! Thanks for the simplest explanation. There have been some changes to the codebase since the video due to library updates, but it doesn't matter. Keep it up!

    • @pixegami
      @pixegami 11 days ago

      You're welcome, glad it helped! I try to keep the code accurate, but sometimes I think these libraries update/change really fast. I think I'll need to lock/freeze package versions in future videos so it doesn't drift.
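
      For example, a pinned requirements.txt might look like this (the version numbers are illustrative, not the exact ones used in the video):

        # requirements.txt - pin exact versions so the tutorial code doesn't drift
        langchain==0.1.16
        chromadb==0.4.24
        openai==1.23.0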

  • @MattSimmonsSysAdmin
    @MattSimmonsSysAdmin 5 months ago +3

    Absolutely epic video. I was able to follow along with no problems by watching the video and following the code. Really tremendous job, thank you so much! Definitely subscribing!

    • @pixegami
      @pixegami 5 months ago

      Thank you for your comment! I'm really glad to hear it was easy to follow - well done! Hope you build some cool stuff with it :)

  • @wtcbd01
    @wtcbd01 2 months ago

    Thanks so much for this. Your teaching style is incredible and the subject is well explained.

  • @lalalala99661
    @lalalala99661 18 days ago +1

    Clean, structured, easy-to-follow tutorial. Thank you for that!

    • @pixegami
      @pixegami 11 days ago

      Thank you! Glad you enjoyed it!

  • @StringOfMusic
    @StringOfMusic 14 days ago +1

    Fantastic, clear, concise, and to the point. Thanks so much for your efforts to share your knowledge with others.

    • @pixegami
      @pixegami 11 days ago

      Thank you, I'm glad you enjoyed it!

  • @jim93m
    @jim93m 3 months ago +2

    Thank you, that was a great walkthrough, very easy to understand, with a great pace. Please make a video on LangGraph as well.

    • @pixegami
      @pixegami 3 months ago

      Thank you! Glad you enjoyed it. Thanks for the LangGraph suggestion. I hadn't noticed that feature before. Tech seems to move fast in 2024 :)

  • @geoffhirst5338
    @geoffhirst5338 2 months ago

    Great walkthrough; now all that's needed is a revision to cope with the changes to the langchain namespaces.

    • @niklasvilnersson24
      @niklasvilnersson24 1 month ago

      What changes have been made? I can't get this to work :-(

  • @michaeldimattia9015
    @michaeldimattia9015 5 months ago +2

    Great video! This was my first exposure to ChromaDB (worked flawlessly on a fairly large corpus of material). Looking forward to experimenting with other language models as well. This is a great stepping stone towards knowledge based expansions for LLMs. Nice work!

    • @pixegami
      @pixegami 5 months ago

      Really glad to hear you got it to work :) Thanks for sharing your experience with it as well - that's the whole reason I make these videos!

  • @gustavstressemann7817
    @gustavstressemann7817 3 months ago +2

    Straight to the point. Awesome!

    • @pixegami
      @pixegami 3 months ago

      Thanks, I appreciate it!

  • @basicvisual7137
    @basicvisual7137 2 months ago +1

    Finally a good langchain video to understand it better. Do you have a video in mind on using a local LLM via Ollama and local embeddings to port the code?

  • @narendaPS
    @narendaPS 1 month ago +1

    This is the best tutorial I have ever seen on this topic. Thank you so much, keep up the good work. Immediately subscribed.

    • @pixegami
      @pixegami 1 month ago

      Glad you enjoyed it. Thanks for subscribing!

  • @MrValVet
    @MrValVet 6 months ago +3

    Thank you for this. Looking forward to tutorials on using Assistants API.

    • @pixegami
      @pixegami 6 months ago +1

      You're welcome! And great idea for a new video :)

  • @kwongster
    @kwongster 3 months ago +1

    Awesome walkthrough, thanks for making this 🎉

    • @pixegami
      @pixegami 2 months ago

      Thank you! Glad you liked it.

  • @mao73a
    @mao73a 20 days ago +1

    This was so informative and well presented. Exactly what I was looking for. Thank you!

    • @pixegami
      @pixegami 11 days ago

      You're welcome, glad you liked it!

  • @jasonlucas3772
    @jasonlucas3772 1 month ago +1

    This was excellent: easy to follow, has code, and very useful! Thank you.

    • @pixegami
      @pixegami 1 month ago

      Thank you, I really appreciate it!

  • @RZOLTANM
    @RZOLTANM 1 month ago +1

    Really good. Thanks very much, sir. Articulated perfectly!

    • @pixegami
      @pixegami 1 month ago

      Thank you! Glad you enjoyed it :)

  • @stevenla2314
    @stevenla2314 6 days ago

    Love your videos. I was able to follow along and build my own RAG. Can you expand more on this series and explain RAPTOR retrieval and how to implement it?

  • @ahmedamamou7221
    @ahmedamamou7221 1 month ago +1

    Thanks a lot for this tutorial! Very well explained.

    • @pixegami
      @pixegami 1 month ago

      Glad it was helpful!

  • @theneumann7
    @theneumann7 2 months ago

    Perfectly explained👌🏼

  • @chrisogonas
    @chrisogonas 1 month ago +1

    Well illustrated! Thanks

  • @thatoshebe5505
    @thatoshebe5505 3 months ago +1

    Thank you for sharing, this was the info I was looking for

    • @pixegami
      @pixegami 3 months ago

      Glad it was helpful!

  • @rikhavthakkar2015
    @rikhavthakkar2015 2 months ago +2

    Simply explained, with an engaging tone.
    I would also like to see a use case where the source of vector data is a combination of files (PDF, DOCX, Excel, etc.) along with some database (an RDBMS or a file-based database).

    • @pixegami
      @pixegami 1 month ago

      Thanks! That's a good idea too. You can probably achieve that by detecting what type of file you are working with, and then using a different parser (document loader) for that type. Langchain should have custom document loaders for all the most common file types.
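
      A minimal sketch of that routing idea (assumes the langchain-community package, plus pypdf and docx2txt for the PDF/DOCX loaders; loader names can differ across versions):

        from pathlib import Path

        from langchain_community.document_loaders import (
            Docx2txtLoader,
            PyPDFLoader,
            TextLoader,
        )

        # Map each file extension to a document loader class.
        LOADERS = {
            ".md": TextLoader,
            ".txt": TextLoader,
            ".pdf": PyPDFLoader,
            ".docx": Docx2txtLoader,
        }

        def load_any(path: str):
            loader_cls = LOADERS.get(Path(path).suffix.lower())
            if loader_cls is None:
                raise ValueError(f"No loader registered for {path}")
            return loader_cls(path).load()  # returns a list of Document objects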

  • @erikjohnson9112
    @erikjohnson9112 5 months ago +1

    I too am quite impressed with your videos (this is my 2nd one). I have now subscribed and I bet you'll be growing fast.

    • @pixegami
      @pixegami 5 months ago +1

      Thank you! 🤩

  • @israeabdelbar8994
    @israeabdelbar8994 3 months ago +2

    Very helpful video! Keep going, you are the best!
    Thank you very much. I'm looking forward to seeing a video about a virtual assistant performing actions by communicating with other applications through APIs.

    • @pixegami
      @pixegami 3 months ago

      Glad you enjoyed it! Thanks for the suggestion :)

    • @israeabdelbar8994
      @israeabdelbar8994 2 months ago

      You're welcome @pixegami

  • @voulieav
    @voulieav 3 months ago +1

    Epic.
    Thank you for sharing this.

  • @bec_Divyansh
    @bec_Divyansh 4 days ago

    Great tutorial! Thanks.

  • @elidumper52
    @elidumper52 2 months ago +1

    Super helpful, thank you!

    • @pixegami
      @pixegami 1 month ago

      Glad it was helpful!

  • @mohanraman
    @mohanraman 1 month ago

    This is an awesome video. Thank you!!! I'm curious how to leverage these technologies with structured data, like business data that's stored in tables. I'd appreciate any videos about that.

  • @pojomcbooty
    @pojomcbooty 1 month ago +3

    VERY well explained. Thank you so much for releasing this level of education on YouTube!!

    • @pixegami
      @pixegami 1 month ago +1

      Glad you enjoyed it!

  • @tinghaowang-ei7kv
    @tinghaowang-ei7kv 1 month ago +1

    Nice, how pretty that is!

  • @lucasboscatti3584
    @lucasboscatti3584 4 months ago +1

    Huge class!!

  • @chandaman95
    @chandaman95 2 months ago +1

    Amazing video, thank you.

  • @williammariasoosai1153
    @williammariasoosai1153 3 months ago +1

    Very well done! Thanks

    • @pixegami
      @pixegami 3 months ago

      Glad you liked it!

  • @shapovalentine
    @shapovalentine 4 months ago +1

    Useful, Nice, Thank You 🤩🤩🤩

    • @pixegami
      @pixegami 3 months ago

      Glad to hear it was useful!

  • @seankim6080
    @seankim6080 2 months ago

    Thanks so much! This is super helpful for better understanding RAG. The only thing is I'm still not sure how to run the program I cloned from your GitHub repository via the Windows terminal. I'll try on my own, but if you could provide any guidance, sources, or YouTube links, it would be much appreciated.

  • @MartinRodriguez-sx2tf
    @MartinRodriguez-sx2tf 29 days ago +1

    Very good, and looking forward to the next one 🎉

  • @jianganghao1857
    @jianganghao1857 15 days ago +1

    Great tutorial, very clear

    • @pixegami
      @pixegami 11 days ago

      Glad it was helpful!

  • @aiden9990
    @aiden9990 3 months ago +1

    Perfect thank you!

    • @pixegami
      @pixegami 3 months ago

      Glad it helped!

  • @serafeiml1041
    @serafeiml1041 1 month ago +1

    You got a new subscriber. Nice work.

    • @pixegami
      @pixegami 1 month ago

      Thank you! Welcome :)

  • @PoGGiE06
    @PoGGiE06 2 months ago +3

    Great explanation. Perhaps one criticism would be the use of OpenAI's embedding library: I'd rather not be locked into their ecosystem, and I believe free alternatives exist that are perfectly good! But I would have loved a quick overview there.

    • @pixegami
      @pixegami 1 month ago +3

      Thanks for the feedback. I generally use OpenAI because I thought it was the easiest API for people to get started with. But actually, I've received similar feedback where people just want to use open-source (or their own) LLM engines.
      Feedback received, thank you :) Luckily, with something like Langchain, swapping out the LLM engine (e.g. the embedding functionality) is usually just a few lines of code.
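
      A rough sketch of that swap (assumes langchain-community and a local Ollama server with the nomic-embed-text model pulled, as mentioned in the reply below):

        from langchain_community.embeddings import OllamaEmbeddings
        # from langchain_openai import OpenAIEmbeddings  # the hosted alternative

        embeddings = OllamaEmbeddings(model="nomic-embed-text")
        # embeddings = OpenAIEmbeddings()  # swapping back is one line
        vector = embeddings.embed_query("apple")  # a list of floats, ready for Chroma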

    • @PoGGiE06
      @PoGGiE06 1 month ago

      @pixegami It's a pleasure :).
      Yes, everyone seems to be using OpenAI by default, because everyone is using ChatGPT. But there are lots of good reasons why one might not wish to get tied to OpenAI, Anthropic, or any other cloud-based provider, besides the mounting costs if one is developing applications using LLMs: e.g. data privacy/integrity, simplicity, reproducibility (ChatGPT is always changing, and that is out of your control), in addition to a general suspicion of non-open-source frameworks whose primary focus is often (usually?) wealth extraction, not solution provision. There is not enough good material out there on how to create a basic RAG with vector storage using a local LLM, something that is very practical with smaller models (e.g. Mistral, DolphinCoder, Mixtral 8x7B), at least for putting together an MVP.
      Re: avoiding openAI:
      I've managed to use embed_model = OllamaEmbeddings(model="nomic-embed-text").
      I still get occasional OpenAI-related errors, but I gather that Ollama has support for mimicking the OpenAI API now, including a 'fake' OpenAI key, so I'm looking into that as a fix.
      ollama.com/blog/windows-preview
      I also gather that with llama-cpp, one can specify model temperature and other configuration options, whereas with Ollama, one is stuck with the configuration used in the modelfile when the Ollama-compatible model is made (if that is the correct terminology). So I may have to investigate that.
      I'm currently using llama-index because I am focused on RAG and don't need the flexibility of langchain.
      Good tutorial in the llama-index docs: docs.llamaindex.ai/en/stable/examples/usecases/10k_sub_question/
      I'm also a bit sceptical that langchain isn't another attempt to 'lock you in' to an ecosystem that can then be monetised e.g. minimaxir.com/2023/07/langchain-problem/. I am still learning, so don't have a real opinion yet. Very exciting stuff! Kind regards.

  • @litttlemooncream5049
    @litttlemooncream5049 2 months ago +1

    helpful if I wanna do analysis on properly-organized documents

    • @pixegami
      @pixegami 2 months ago

      Yup! I think it could be useful for searching through unorganised documents too.

  • @pampaniyavijay007
    @pampaniyavijay007 18 days ago +1

    This is a very simple and useful video for me 🤟🤟🤟

    • @pixegami
      @pixegami 11 days ago

      Thank you! I'm glad to hear that.

  • @quengelbeard
    @quengelbeard 3 months ago +2

    Hi, by far the best video on Langchain + Chroma! :D
    Quick question: how would you update the Chroma database if you want to feed it new documents (while avoiding duplicates)?

    • @pixegami
      @pixegami 2 months ago

      Glad you liked it! Thank you. If you want to add (modify) the ChromaDB data, you should be able to do that after you've loaded up the DB:
      docs.trychroma.com/usage-guide#adding-data-to-a-collection
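
      A minimal sketch with Chroma's native client (method names may shift between versions); deterministic IDs make re-runs idempotent, which addresses the duplication question:

        import chromadb

        client = chromadb.PersistentClient(path="chroma")
        collection = client.get_or_create_collection("docs")

        # Deriving the ID from source + chunk index means re-adding the same
        # chunk updates it instead of duplicating it.
        collection.upsert(
            ids=["alice.md:chunk-0"],
            documents=["Alice was beginning to get very tired..."],
            metadatas=[{"source": "alice.md"}],
        )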

  • @matthewlapinta7388
    @matthewlapinta7388 5 days ago +1

    This video was pure gold. Really grateful for the concise and excellent walkthrough. I have two additional questions regarding the metadata and the resulting chunk reference displayed. Can you return a screenshot of the chunk/document referenced, now that models are multimodal? Also, a document title or the ability to download the document would be a cool feature. Thanks so much in advance!

    • @pixegami
      @pixegami 11 hours ago +1

      Glad you enjoyed it! I think if you want to display images, or link/share resources via the chunk, you can just embed it at chunk creation time into the document meta-data.
      Upload your resource (e.g. image) to something like Amazon S3, then put a download link into the meta-data for example.
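
      For example (a sketch only; the S3 URL is a hypothetical placeholder):

        from langchain.schema import Document

        chunk = Document(
            page_content="...chunk text...",
            metadata={
                "source": "report.pdf",
                "image_url": "https://my-bucket.s3.amazonaws.com/page-5.png",  # hypothetical asset link
            },
        )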

  • @FrancisRodrigues
    @FrancisRodrigues 2 months ago +1

    That's the best and most reliable content about LangChain I've ever seen, and it only took 16 minutes.

    • @pixegami
      @pixegami 1 month ago +1

      Glad you enjoyed it! I try to keep my content short and useful because I know everyone is busy these days :)

    • @Shwapx
      @Shwapx 1 month ago

      @pixegami Hey, great work! Can we have an updated version with the new langchain imports? It's throwing all kinds of import errors because the imports have changed.

  • @shikharsaxena9989
    @shikharsaxena9989 21 days ago +1

    Best explanation of RAG.

  • @AdandKidda
    @AdandKidda 2 months ago

    Hi, thanks for sharing such valuable knowledge.
    I have a use case:
    1. Can we perform some action (call an API) as a response?
    2. How can we use Mistral and open-source embeddings for this purpose?

  • @corbin0dallas
    @corbin0dallas 24 days ago +1

    Great tutorial, thanks! My only feedback is that any LLM already knows everything about Alice in Wonderland.

    • @SongforTin
      @SongforTin 23 days ago +1

      You can create custom apps for businesses using their own documents: a huge business opportunity if it really works.

    • @pixegami
      @pixegami 11 days ago

      Yeah that's a really good point. What I really needed was a data-source that was easy to understand, but would not appear in the base knowledge of any LLM (I've learnt that now for my future videos).

  • @user-iz7wi7rp6l
    @user-iz7wi7rp6l 5 months ago +1

    First of all, thank you very much. Could you also show how to apply memory of various kinds?

    • @pixegami
      @pixegami 5 months ago +1

      Thanks! I haven't looked at how to use the Langchain memory feature yet so I'll have to work on that first :)

    • @user-iz7wi7rp6l
      @user-iz7wi7rp6l 5 months ago +1

      @pixegami OK, I've implemented memory and other features, and got it working on Windows too after some monster errors. Thanks once again for the clear working code (used in production).
      Hope to see more in the future.

  • @bcippitelli
    @bcippitelli 5 months ago +1

    thanks dude!

  • @sunnysk43
    @sunnysk43 6 months ago +3

    Amazing video - directly subscribed to your channel ;-) Can you also provide an example of using your own LLM instead of OpenAI?

    • @pixegami
      @pixegami 6 months ago +1

      Yup! Great question. I'll have to work on that, but in the meantime here's a page with all the LLM supported integrations: python.langchain.com/docs/integrations/llms/

  • @theobelen-halimi2862
    @theobelen-halimi2862 3 months ago +2

    Very clear video and tutorial! Good job! Just one question: is it possible to use an open-source model rather than OpenAI?

    • @pixegami
      @pixegami 3 months ago +1

      Yes! Check out this video on how to use different models other than OpenAI: ruclips.net/video/HxOheqb6QmQ/видео.html
      And here is the official documentation on how to use/implement different LLMs (including your own open source one) python.langchain.com/docs/modules/model_io/llms/

  • @frederikklein1806
    @frederikklein1806 4 months ago +1

    This is a really good video, thank you so much! Out of curiosity, why do you use iTerm2 as a terminal, and how did you set it up to look that cool? 😍

    • @pixegami
      @pixegami 3 months ago +1

      I use iTerm2 for videos because it looks and feels familiar for my viewers. When I work on my own, I use warp (my terminal set up and theme explained here: ruclips.net/video/ugwmH_xzkCA/видео.html)
      And if you're using Ubuntu, I have a terminal setup video for that too: ruclips.net/video/UvY5aFHNoEw/видео.html

  • @user-md4pp8nv7u
    @user-md4pp8nv7u 1 month ago +1

    Very great!! Thank you

    • @pixegami
      @pixegami 1 month ago

      Glad you liked it!

  • @nachoeigu
    @nachoeigu 1 month ago +1

    You gained a new subscriber. Thank you, amazing content! Only one question: what about the cost associated with this software? How much does it consume per request?

    • @pixegami
      @pixegami 1 month ago

      Thank you, welcome! To calculate pricing, it's based on which AI model you use. In this video, we use OpenAI, so check the pricing here: openai.com/pricing
      1 Token ~= 1 Word. So to embed a document with 10,000 words (tokens) with "text-embedding-3-large" ($0.13 per 1M token), it's about $0.0013. Then apply the same calculation to the prompt/response for "gpt-4" or whichever model you use for the chat.
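
      The same back-of-the-envelope estimate in code (prices are illustrative; always check openai.com/pricing):

        PRICE_PER_1M_TOKENS = 0.13  # text-embedding-3-large, USD
        tokens = 10_000             # ~10,000 words, under the 1 token ~= 1 word rule of thumb
        cost = tokens / 1_000_000 * PRICE_PER_1M_TOKENS
        print(f"${cost:.4f}")       # -> $0.0013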

  • @kewalkkarki6284
    @kewalkkarki6284 4 months ago +1

    This is Amazing 🙌

    • @pixegami
      @pixegami 4 months ago

      Thank you! Glad you liked it :)

  • @JJaitley
    @JJaitley 3 months ago +1

    @pixegami What are your suggestions on cleaning company docs before chunking? Some of the challenges are how to handle the index pages in multiple PDFs, as well as the headers and footers. You should definitely make a video on cleaning a PDF before chunking; much needed.

    • @pixegami
      @pixegami 3 months ago

      That's a tactical question that will vary from doc to doc. It's a great question and a great use-case for creative problem solving, though. Thanks for the suggestion and video idea.

  • @slipthetrap
    @slipthetrap 4 months ago +19

    As others have asked: "Could you show how to do it with an open-source LLM?" Also, instead of Markdown (.md), can you show how to use PDFs? Thanks.

    • @pixegami
      @pixegami 4 months ago +8

      Thanks :) It seems to be a popular topic so I've added to my list for my upcoming content.

    • @danishammar.official
      @danishammar.official 2 months ago

      If you make a video on the above request, kindly put the link in the description. It would be good for all users.

    • @raheesahmed56
      @raheesahmed56 2 months ago +2

      Instead of the .md extension you can simply use a .txt or .pdf extension; that's it, just replace the file extension.

    • @yl8908
      @yl8908 1 month ago

      Yes, please share how to work with PDFs directly instead of .md files. Thanks!

  • @NahuelD101
    @NahuelD101 5 months ago +2

    Very nice video. What kind of theme do you use to make VSCode look like this? Thanks.

    • @pixegami
      @pixegami 5 months ago +1

      I use Monokai Pro :)

    • @pixegami
      @pixegami 5 months ago +2

      The VSCode theme is called Monokai Pro :)

  • @Chisanloius
    @Chisanloius 20 days ago +2

    Great level of knowledge and detail.
    Please, where is your OpenAI key stored?

    • @pixegami
      @pixegami 11 days ago

      Thank you! I normally just store the OpenAI key in the environment variable `OPENAI_API_KEY`. See here for storage and safety tips: help.openai.com/en/articles/5112595-best-practices-for-api-key-safety
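
      A minimal sketch of reading it in Python (set the variable first, e.g. export OPENAI_API_KEY="sk-..." in ~/.bashrc):

        import os

        # Keep the key out of source control; read it from the environment instead.
        api_key = os.environ.get("OPENAI_API_KEY")
        if api_key is None:
            raise RuntimeError("OPENAI_API_KEY is not set")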

  • @naveeng2003
    @naveeng2003 4 months ago +2

    How did you rip the AWS documentation?

  • @yangsong8812
    @yangsong8812 2 months ago +1

    Would love to hear your thoughts on how to use evaluation to keep LLM output in check. Can we set up an evaluation framework for this?

    • @pixegami
      @pixegami 2 months ago

      There's currently a lot of different research and tools on how to evaluate the output - I don't think anyone's figured out the standard yet. But stuff like this is what you'd probably want to look at: cloud.google.com/vertex-ai/generative-ai/docs/models/evaluate-models

  • @vlad910
    @vlad910 4 months ago +1

    Thank you for this very instructive video. I am looking at embedding some research documents from sources such as PubMed or Google scholar. Is there a way for the embedding to use website data instead of locally stored text files?

    • @pixegami
      @pixegami 4 months ago +1

      Yes, you can basically load any type of text data if you use the appropriate document loader: python.langchain.com/docs/modules/data_connection/document_loaders/
      Text files are an easy example, but there's examples of Wikipedia loaders in there too (python.langchain.com/docs/integrations/document_loaders/). If you don't find what you are looking for, you can implement your own Document loader, and have it get data from anywhere you want.
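
      A quick sketch with the Wikipedia loader (assumes the langchain-community and wikipedia packages are installed):

        from langchain_community.document_loaders import WikipediaLoader

        # Fetches article text from the web instead of local files.
        docs = WikipediaLoader(query="Retrieval-augmented generation", load_max_docs=2).load()
        print(docs[0].metadata["title"], len(docs))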

    • @jessicabull3918
      @jessicabull3918 1 month ago

      @pixegami Exactly the question and answer I was looking for, thanks!

  • @user-fj4ic9sq8e
    @user-fj4ic9sq8e 2 months ago

    Hello,
    thank you so much for this video.
    I have a question about summarization-style questions over documents. For example, the vector database has thousands of documents with a date property, and I want to ask the model how many documents I received in the last week.

  • @cindywu3265
    @cindywu3265 2 months ago +1

    Thanks for sharing the examples with the OpenAI embedding model. I'm trying to practice using HuggingFaceEmbeddings because it's free, but I wanted to check the evaluation metrics - like the apple and orange example you showed. Do you know if an equivalent exists, by any chance?

    • @pixegami
      @pixegami 2 months ago

      Yup, you should be able to override the evaluator (or extend your own) to use whichever embedding system you want: python.langchain.com/docs/guides/evaluation/comparison/custom
      But at the end of the day, if you can already get the embedding, then evaluation is usually just a cosine similarity distance between the two, so it's not too complex if you need to calculate it yourself.
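
      A minimal sketch of that comparison (assumes langchain-community plus sentence-transformers; the model name is just a common default, not the one from the video):

        import numpy as np
        from langchain_community.embeddings import HuggingFaceEmbeddings

        emb = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
        a, b = emb.embed_query("apple"), emb.embed_query("orange")

        # Cosine similarity: closer to 1.0 means more semantically similar.
        similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        print(similarity)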

  • @user-wi8ne4qb6u
    @user-wi8ne4qb6u 4 months ago +1

    Excellent coding! Working wonderfully! Much appreciated. One question, please: what's the difference if I change from .md to .pdf?

    • @pixegami
      @pixegami 4 months ago

      Thanks, glad you enjoyed it. It should still work fine :) You might just need to use a different "Document Loader" from Langchain: python.langchain.com/docs/modules/data_connection/document_loaders/pdf
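
      A minimal sketch of that swap (assumes langchain-community and pypdf; the file path is a hypothetical placeholder):

        from langchain_community.document_loaders import PyPDFLoader

        # One Document per page, with page numbers in the metadata.
        documents = PyPDFLoader("data/alice.pdf").load()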

  • @ailenrgrimaldi6050
    @ailenrgrimaldi6050 2 months ago +1

    Thank you for this video, is NLTK something required to do this?

    • @pixegami
      @pixegami 1 month ago +1

      The NLTK library? I don't think I had to use it in this project; a lot of the other libraries probably give you all the functionality at a higher level of abstraction already.

  • @RajAIversion
    @RajAIversion 2 months ago

    Nailed it, and easily understandable. Can I make this into a chatbot?
    Anyone, please share your thoughts.

  • @hoangngbot
    @hoangngbot 1 month ago +1

    Thank you for a great video. What if I've already done the word embeddings and in the future I have some updates to the data?

    • @pixegami
      @pixegami 1 month ago

      Thanks! I'm working on a video to explain techniques like that. But in a nutshell, you'll need to attach an ID to each document you add to the DB (derived deterministically from your page meta-data) and use that to update entries that change (or get added): docs.trychroma.com/usage-guide#updating-data-in-a-collection

  • @SantiYounger
    @SantiYounger 2 months ago +2

    Thanks for the video, this looks great, but I tried to implement it and it seems like the langchain packages needed are no longer available. Has anyone had any luck getting this to work?
    Thanks

  • @officialayanvarekar
    @officialayanvarekar 4 days ago

    Great video! How can I use this with local models like llama-8b?

  • @mlavinb
    @mlavinb 4 months ago +1

    Great content! Thanks for sharing.
    Can you suggest a chat GUI to connect it to?

    • @pixegami
      @pixegami 3 months ago

      If you want a simple, Python based one, try Streamlit (streamlit.io/). I also have a video about it here: ruclips.net/video/D0D4Pa22iG0/видео.html
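
      A minimal sketch of a Streamlit chat front-end (query_rag is a hypothetical wrapper around the query logic from the video, not an existing function in the repo):

        import streamlit as st

        from query_data import query_rag  # hypothetical module/function

        st.title("Chat with your docs")
        question = st.chat_input("Ask a question")
        if question:
            st.chat_message("user").write(question)
            st.chat_message("assistant").write(query_rag(question))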

  • @AjibadeYakub
    @AjibadeYakub 15 days ago +1

    This is great work, thank you.
    How can I use the result of a SQL query or a DataFrame, rather than text files?

    • @pixegami
      @pixegami 11 days ago

      Yup, looks like there is a Pandas Dataframe Document loader you can use with Langchain: python.langchain.com/v0.1/docs/integrations/document_loaders/pandas_dataframe/
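
      A minimal sketch (assumes langchain-community and pandas; the column names are illustrative):

        import pandas as pd
        from langchain_community.document_loaders import DataFrameLoader

        df = pd.DataFrame({"text": ["First row of results"], "source": ["orders.sql"]})

        # The chosen column becomes the page content; the other columns become metadata.
        docs = DataFrameLoader(df, page_content_column="text").load()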

  • @jimg8296
    @jimg8296 1 month ago +1

    Thank you SO MUCH! Exactly what I was looking for. Your presentation was easy to understand and very complete. 5 STARS! Not to be greedy, but I'd love to see this running 100% locally.

    • @pixegami
      @pixegami 1 month ago +2

      Glad it was helpful! Running local LLM apps is something I get asked quite a lot about and so I do actually plan to do a video about it quite soon.

    • @jessicabull3918
      @jessicabull3918 1 month ago

      @pixegami Yes please!

  • @FrancisRodrigues
    @FrancisRodrigues 2 months ago +1

    Please, I'd like to see a recommendation model (products, images, etc.) based on different sources; it could be scraping from webpages. Something to use in e-commerce.

    • @pixegami
      @pixegami 1 month ago

      Product recommendations are a good idea :) Thanks for the suggestion, I'll add it to my list.

  • @fengshi9462
    @fengshi9462 4 months ago +1

    Hi, your video is so good. I just want to know: if I want to automatically update my documents in the production environment, keep the query service running without interruption, and always use the latest documents as the source, how can I do this by changing the code? ❤

    • @pixegami
      @pixegami 4 months ago +1

      Ah, if you change the source document, you actually have to generate a new embedding and add it to the RAG database (the Chroma DB here). So you would have to figure out which piece of the document changed, then create a new entry for it in the database. I don't have a code example right now, but it's definitely possible.

  • @xspydazx
    @xspydazx 20 days ago

    Question: once a vector store is loaded, how can we export a dataset from the store to be used for fine-tuning?

  • @annialevko5771
    @annialevko5771 3 months ago +1

    Hey, nice video. I was just wondering: what's the difference between doing it like this and using chains? I noticed you didn't use any chain and directly called predict() with the prompt 🤔

    • @pixegami
      @pixegami 3 months ago

      With chains, I think you have a little bit more control (especially if you want to do things in a sequence). But since that wasn't the focus of this video, I just did it using `predict()`.

  • @RobbyRobinson1
    @RobbyRobinson1 6 months ago +1

    I was just thinking about this - great work.
    Hypothetically, what if your data sucks? What models can I use to create the documentation? (lol)

    • @pixegami
      @pixegami 6 months ago +1

      Haha, that's a topic for another video. But yeah, if the data is not good, then I think that should be your first focus. This RAG technique builds on the assumption that your data is good, and it just adds value on top of that.

  • @uchiha_mishal
    @uchiha_mishal 10 days ago +1

    Nicely explained but I had to go through a ton of documentation for using this project with AzureOpenAI instead of OpenAI.

    • @pixegami
      @pixegami 10 days ago

      Thanks! I took a look at the Azure OpenAI documentation on Langchain and you're right, it doesn't exactly look straightforward: python.langchain.com/v0.1/docs/integrations/llms/azure_openai/

  • @hoangngbot
    @hoangngbot 1 month ago

    I want to hear your thoughts on which approach is likely better:
    1. Chop the document into multiple chunks and convert the chunks to vectors
    2. Convert the whole document to a single vector
    Thank you

    • @pixegami
      @pixegami 28 days ago

      I think it really depends on your use-case and the content. The best way to know is to have a way to evaluate (test) the results/quality.
      In my own use-cases, I find that a chunk length of around 3000 characters works quite well (you need enough context for the content to make sense). I also like to concatenate some context info into the chunk (like "this is page 5 about XYZ, part of ABC").
      But I haven't done enough research into this to really give a qualified answer. Good luck!
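
      A minimal sketch of both ideas (the 3000-character figure follows the comment above; in newer releases the splitter lives in langchain_text_splitters):

        from langchain.schema import Document
        from langchain.text_splitter import RecursiveCharacterTextSplitter

        documents = [Document(page_content="..." * 5000, metadata={"source": "alice.md"})]
        splitter = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=300)
        chunks = splitter.split_documents(documents)

        # Concatenate a little context into each chunk, as described above.
        for chunk in chunks:
            src = chunk.metadata.get("source", "unknown")
            chunk.page_content = f"[From {src}] {chunk.page_content}"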

  • @gvtanuja4874
    @gvtanuja4874 4 months ago +2

    Great video... but where have you added the OpenAI API keys?

    • @pixegami
      @pixegami 3 months ago +1

      You can add them as environment variables :) I add mine to my .bashrc file.

  • @mohsenghafari7652
    @mohsenghafari7652 1 month ago +1

    Hi dear friend.
    Thank you for your efforts.
    How can I use this tutorial with PDFs in another language (for example, Persian)?
    I made many attempts and tested different models, but the results when asking questions about the PDFs are not good or accurate!
    Thank you for the explanation.

    • @pixegami
      @pixegami 28 days ago

      Thank you for your comment. For good performance in other languages, you'll probably need to find an LLM model that is optimized for that language.
      For Persian, I see this result: huggingface.co/MaralGPT/Maral-7B-alpha-1

  • @marselse
    @marselse 21 days ago +1

    Amazing video! How would I make this into a web app? Like a support chatbot?

    • @pixegami
      @pixegami 11 days ago +1

      Thank you! To make it into a web app, you'd first need to turn this into an API and deploy it somewhere (my next video tutorial will cover that, so stay tuned!). Then you'll need to connect the API to a front-end (webpage or app) for your users.
      There are a couple of low-code/no-code tools to help you achieve this, but I haven't looked into them in detail yet. Or you can code it all up yourself if you are technical.

  • @user-wm2pb3hi7p
    @user-wm2pb3hi7p 2 months ago

    How can we make a RAG system that answers over both structured and unstructured data?
    For example, a user uploads a CSV and a text file and starts asking questions; the chatbot then has to answer from both data stores.
    (The structured data should be stored in a separate database and passed to a tool to process; the unstructured data should go in the vector database.)
    How can we do this effectively?

  • @lukashk.1770
    @lukashk.1770 1 month ago +1

    Do these tools also work with code? For example, with a big codebase, querying that codebase to ask how XYZ is implemented would be really useful. Or generating docs, etc.

    • @pixegami
      @pixegami 28 days ago

      I think the idea of a RAG app should definitely work with code.
      But you'll probably need to have an intermediate step to translate that code close to something you'd want to query for first (e.g. translate a function into a text description). Embed the descriptive element, but have it refer to the original code snippet.
      It sounds like a really interesting idea to explore for sure!

  • @spicytuna08
    @spicytuna08 2 months ago

    Would this work for a general question such as: please summarize the book in 5 sentences?

  • @henrygagejr.-founderbuildg9199
    @henrygagejr.-founderbuildg9199 3 months ago +2

    Great instructions! I read through all the comments.
    How do you get paid? I value the work of others and want to explore an affiliate model that I tested a year ago.
    What is a good way to connect with you and explore possibilities of mutual interest?

    • @pixegami
      @pixegami 3 months ago +1

      Thanks for your kind words, but I'm actually doing these videos as a hobby and I already have a full time job so I'm not actually interested in exploring monetisation options right now.

  • @moriztrautmann8231
    @moriztrautmann8231 27 days ago +1

    Thank you very much for the video. It seems that adding chunks to the Chroma database takes a really long time. If I just save the embeddings to a JSON file it takes a few seconds, but with Chroma it takes like 20 minutes... Is there something I'm missing? I'm only doing this on a document about one page long.

    • @pixegami
      @pixegami 11 days ago

      Hey, thanks for commenting!
      It does seem like something is wrong - generating embeddings to a JSON file and via Chroma should normally take about the same time for the same amount of text, so the two paths must be doing different things.
      Have you tried using different embedding functions? Or is your ChromaDB saved onto a slower disk drive?

  • @user-cc3ev7de9v
    @user-cc3ev7de9v 2 months ago

    Which model are you using in this?

  • @MichaelChenAdventures
    @MichaelChenAdventures 2 months ago +1

    Does the data have to be in .md format? Also, how do you prep the data beforehand?

    • @pixegami
      @pixegami 2 months ago

      The data can be anything you want. Here's a list of all the Document loaders supported in Langchain (or you can even write your own): python.langchain.com/docs/modules/data_connection/document_loaders/
      The level of preparation is up to you, and it depends on your use case. For example, if you want to split your embeddings by chapters or headers (rather than some length of text), your data format will need a way to surface that.
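
      A minimal sketch of splitting on Markdown headers instead of fixed lengths (in newer releases the class lives in langchain_text_splitters):

        from langchain.text_splitter import MarkdownHeaderTextSplitter

        splitter = MarkdownHeaderTextSplitter(
            headers_to_split_on=[("#", "chapter"), ("##", "section")]
        )
        chunks = splitter.split_text("# Chapter 1\n## Down the Rabbit-Hole\nAlice was...")
        # Each chunk carries its chapter/section names in chunk.metadata.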

  • @johnfakes1298
    @johnfakes1298 3 months ago +1

    What is the compiler you are using? I am using Jupyter Notebook but yours looks better.

    • @pixegami
      @pixegami 3 months ago +1

      If you mean the development environment (editor or IDE), then I'm using VSCode with the Monokai Pro theme.

    • @johnfakes1298
      @johnfakes1298 3 months ago

      @pixegami Thank you

  • @canasdruid
    @canasdruid 22 days ago +1

    What is more advisable if I work with PDF documents: transforming them into text using a library like PyPDFLoader, or transforming them into another format that is easier to read?

    • @pixegami
      @pixegami 11 days ago

      I haven't done a deep dive on what's the most optimal way to use PDF data yet. I think it really depends on the data in the PDF, and what the chunk outputs look like. You probably need to do a bit of experimentation.
      If you have specific patterns with your PDFs (like lots of tables or columns) I'd probably try to pre-process them somehow first before feeding them into the document loader.

  • @Nouman-es7on
    @Nouman-es7on 2 months ago

    Can I upload JSON files, updated through a cron job?

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 6 months ago +3

    Can we use other LLMs besides OpenAI?

    • @pixegami
      @pixegami 6 months ago

      Absolutely! That's the whole benefit of Langchain - it's LLM agnostic. Here's a list of all the LLM interfaces it supports: python.langchain.com/docs/integrations/llms/

  • @pamr001
    @pamr001 19 days ago +1

    Which theme do you use in VSCode?

    • @pixegami
      @pixegami 11 days ago

      I use Monokai Pro :)

  • @betagroobox
    @betagroobox 2 months ago +1

    Where did you find the AWS docs in Markdown? 😅

    • @pixegami
      @pixegami 1 month ago

      It used to be in the repo I linked in the comments, but sadly looks like that has been deprecated :( aws.amazon.com/blogs/aws/retiring-the-aws-documentation-on-github/