Chat with Multiple PDFs | LangChain App Tutorial in Python (Free LLMs and Embeddings)

  • Published: 23 Nov 2024

Comments • 1K

  • @alejandro_ao
    @alejandro_ao  28 дней назад +1

    🔥Join the AI Engineer Bootcamp:
    - Learn with step-by-step lessons and exercises
    - Join a community of like-minded and amazing people from all over the world
    - I'll be there to personally answer all your questions 🤓
    - The spots are limited since I'll be directly interacting with you
    You can join the waitlist now 👉 course.alejandro-ao.com/
    Cheers!

  • @pathmonkofficial
    @pathmonkofficial Год назад +52

    The use of Huggingface language models takes this to another level, enhancing performance and functionality. The tutorial's step-by-step approach to setting up LangChain and building the chatbot application is truly valuable.

    • @alejandro_ao
      @alejandro_ao  Год назад +19

      you are truly valuable

    • @kaushikas4764
      @kaushikas4764 9 месяцев назад

      What huggingface model is he using here?

    • @maximus3159
      @maximus3159 8 месяцев назад +6

      This comment sounds suspiciously AI generated

    • @mrudulasawant4677
      @mrudulasawant4677 4 месяца назад

      @@alejandro_ao can we use python 3.10?

    • @chilldom.
      @chilldom. 2 месяца назад

      @@alejandro_ao i cannot thank you enough for this. Love from Ethiopia❤❤

  • @Pramesh37
    @Pramesh37 8 месяцев назад +15

    Mate, you're a legend. I was searching for tutorials on Langchain framework, HuggingFace, LLM and Embeddings to understand the concept. But this one practical implementation gave me the entire package. Great pace, clear explanation of concepts, overall amazing tutorial. You are a gifted teacher and I hope you continue to teach such rare topics. Earned yourself a subscriber, looking forward to more such videos.

    • @xspydazx
      @xspydazx 7 месяцев назад

      in reality we should not be using any form of cloud AI systems unless they are FREE!! That's point 1...
      But we should also be focusing on Hugging Face models!
      All tasks can be performed with any model!
      Even the embeddings can be extracted from the model, so there is no need for external embeddings providers; the embeddings used should ALWAYS come from the model. The RAG context can be added to the tokenized prompt and injected as content, so pre-tokenized datasets are useful, reducing the search time and RAG latency for local systems (we cannot be held to ransom using the internet as a backbone for everything and making these people richer each day!).
      The services provided by a vector store are easily created in Python without third-party libraries, but any library which is completely open source and local is perfect!
      In fact we should be looking at our AI researchers to fill our RAG based on our expectations, and after examining and filtering it should be possible to extract it up to the LLM (fine-tuned in, as talking to the LLM DOES NOT TEACH IT!).

    • @HelloIamLauraa
      @HelloIamLauraa 12 дней назад

      hii:). which HF model are u using?

  • @langmod
    @langmod Год назад +30

    Perfectly executed tutorial. Definitely worth a coffee. If you are taking suggestions, I'd be interested in a tutorial (or just exploring potential solutions) on comparing content between two documents; or more specifically answering questions about changes/updates between document versions and revisions. There are many situations where changes are made to a document (e.g. new edition of a book; documentation for python 2 vs 3; codebase turnover; etc.), and while 'diff' can show you exactly what changed in excruciating detail, it would be nice to have an LLM copilot that can answer semantic questions about doc changes. For example a bioinformatics professor might want to know how they should update their course curriculum as they transition from edition 3 to edition 4 of a textbook (e.g. Ch4 content has been moved to Ch5 to make room for a new Ch4 on advances in gene editing; Ch7 has major revisions on protein folding models).

    • @alejandro_ao
      @alejandro_ao  10 месяцев назад +5

      hey there! sorry for the late reply, this is a great idea! i started recording videos again a couple weeks ago and they are going up soon. this is totally something that could be very useful to a lot of people. i will look into that! and thanks for the coffee, you are amazing!!

  • @AdegbengaAgoroCrenet
    @AdegbengaAgoroCrenet Год назад +20

    I rarely comment on YT videos, and I must say your sequencing and delivery of this content is really good. It's informative, clear, concise and straight to the point. No fluff or hype, just good quality content with exceptional delivery. I couldn't help but subscribe to your channel and smash the like button. I have seen a lot of videos about this and they don't deliver the kind of value you have.

    • @alejandro_ao
      @alejandro_ao  10 месяцев назад

      thank you man, it means a lot!

  • @speerunscompared
    @speerunscompared Год назад +19

    This tutorial is excellent. It's nice that you also explained some of the smaller details, like the environment variable setup, and how this works with git.

  • @erniea5843
    @erniea5843 Год назад +19

    Well done! That overview diagram is very helpful and I appreciate that you referred back to it often. Too often tutorial videos neglect the system overview aspects, but you made it easy to see how it all fits together.

  • @sandorkonya
    @sandorkonya Год назад +23

    Nice project! Since langchain's pdf reader saves the page as metadata, if you ask something, the results (the pages of the pdf) could be shown in an embedded canvas next to the chat. This way one could see the relevant pages of the corresponding PDFs, not just the straight answer.

    • @maxbodley6452
      @maxbodley6452 Год назад +14

      Yeah that sounds like a great idea. Do you know how you would go about doing that?

    • @kaiserchief500
      @kaiserchief500 Год назад

      @@maxbodley6452 have you got some information of how that works?

    • @xt3708
      @xt3708 Год назад

      bump

    • @oleum5589
      @oleum5589 Год назад

      how would you do this

    • @sandorkonya
      @sandorkonya Год назад

      @@oleum5589 langchain.document_loaders.pdf.PyPDFLoader --> Loader also stores page numbers in metadata.
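
      A minimal sketch of that idea, assuming the langchain version used at the time of the video (newer releases moved the loader to langchain_community.document_loaders); the file name is just an example:

          from langchain.document_loaders import PyPDFLoader

          loader = PyPDFLoader("example.pdf")
          pages = loader.load()  # one Document per page
          for doc in pages:
              # metadata carries the source file and the zero-based page number
              print(doc.metadata["source"], doc.metadata["page"], doc.page_content[:80])

      Keeping these Documents (instead of raw strings) in the vector store means each retrieved chunk still carries its page number, which the UI could use to display the matching page next to the chat.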

  • @sahiljamadar7324
    @sahiljamadar7324 8 месяцев назад

    I wanted to get a taste of LLMs and this video delivered exactly that. I completed this project and it works fine, and it taught me a lot about the vector store and the LLM itself, which I very much appreciated. THANKS A LOT MAN!!!

  • @shivamroy1775
    @shivamroy1775 Год назад +48

    Great quality content. I absolutely love that you took the time to explain everything in such great detail and walk us through the coding process, unlike a few other videos on YouTube that compromise explainability and knowledge for pace. Please keep up the good work. Also, the explanation of the system diagram of the application was by far the best explanation I have ever seen.

    • @WildFire49
      @WildFire49 Год назад

      is your project working? when i process my pdfs it is not getting converted into chunks, What should i do?

    • @martinkrueger937
      @martinkrueger937 Год назад

      Does anyone know how to use Azure instead of OpenAI?

    • @MachineLearningZuu
      @MachineLearningZuu Год назад

      Yes I am using. What is the issue ? @@martinkrueger937

    • @mrudulasawant4677
      @mrudulasawant4677 4 месяца назад

      can we use python 3.10?

  • @svenst
    @svenst Год назад +38

    Hey, thanks for this tutorial. Small hint: it’s recommended to use pypdf instead of PyPDF2, since this branch was merged back into pypdf. ;-)
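
    For reference, a rough sketch of the same raw-text extraction with pypdf instead of PyPDF2 (the class and method names are the same, only the package changes):

        from pypdf import PdfReader

        def get_pdf_text(pdf_docs):
            text = ""
            for pdf in pdf_docs:
                reader = PdfReader(pdf)
                for page in reader.pages:
                    text += page.extract_text() or ""  # extract_text() can return None on empty pages
            return text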

  • @fishbyte
    @fishbyte Год назад +9

    Hi Alejandro, thank you for making the series of Langchain tutorials. I have learned a lots! I wonder if you could show us how to ask a question over multiple uploaded files with different formats (e.g., PDFs + csv files).

    • @francoislepron2301
      @francoislepron2301 Год назад

      This would be really helpful. Do you think that such a tool set is able to recognize the fields in an invoice, such as the provider, the date, the invoice reference, and the amounts and quantities for each article, the total price, and after we can query the tool for all invoices received from a specific provider and so on ?

  • @crystal14w
    @crystal14w Год назад +2

    This was great! I was able to build it with no problem 😄 the only issue I had was the human photo being outdated so I tried to upload a new photo but it didn’t update.
    Major warning ⚠️ to those who test their apps a lot. Don’t waste your free API credits, because OpenAI will ask you for your card number and take away $5 😢 I didn’t know that was a thing until now. I built another project with the OpenAI API so just keep tabs everyone 🙏
    This was a great video! Thanks so much 👏

    • @alejandro_ao
      @alejandro_ao  10 месяцев назад +2

      hey there, that's a good point! oh that's strange. anyways, you can now use the latest streamlit chat module, which allows you to create a chat-like UI with a few lines instead of building it all in HTML and CSS like we did here :)
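
      A rough sketch of what that newer Streamlit chat API looks like (st.chat_input / st.chat_message), in case anyone wants to swap out the HTML templates:

          import streamlit as st

          if "messages" not in st.session_state:
              st.session_state.messages = []  # list of {"role": ..., "content": ...}

          if prompt := st.chat_input("Ask a question about your documents"):
              st.session_state.messages.append({"role": "user", "content": prompt})
              # ... call the conversation chain here and append the answer ...

          for msg in st.session_state.messages:
              with st.chat_message(msg["role"]):
                  st.write(msg["content"])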

  • @junyang1710
    @junyang1710 Год назад +4

    you are such a good teacher, everything is explained so clearly. Thank you!

  • @deveshkumar84
    @deveshkumar84 3 месяца назад

    This helped me a lot to understand and build my first project related to LLMs. It is an amazing tutorial which gives you a clear explanation regarding the methods and processes being used, which is required for making any modifications to the project.
    I am facing some issues with the installation of Instructor Embedding, which shows why people prefer to use API calls instead of running on their own hardware. (You don't have to worry about maintenance and everything becomes easier to implement with API calls.)

    • @alejandro_ao
      @alejandro_ao  2 месяца назад

      Great to hear! Indeed, using an LLM API lets you outsource all this tedious setup and also all the updates for new LLMs :)

  • @iftrejom
    @iftrejom Год назад +17

    Thank you, man! I had so much fun replicating this project, and I feel I learnt a lot from it. I am an AI student and this is the kind of content that makes candidates appealing to employers. I will try to build up some projects of my own with all the great stuff I just learnt.

    • @alejandro_ao
      @alejandro_ao  Год назад +3

      that's awesome mate! keep building side projects and don't forget to look back to see your progress 💪

    • @deekshithkumar2153
      @deekshithkumar2153 Год назад +1

      Can you please answer this: why am I not getting any output as shown in the video, other than this?
      load INSTRUCTOR_Transformer
      max_seq_length 512
      load INSTRUCTOR_Transformer
      max_seq_length 512
      Is it a problem with my system specifications or anything else?

    • @alangeorge1090
      @alangeorge1090 Год назад

      Even I'm currently facing the same issue, still unresolved :(@@deekshithkumar2153

    • @mohammedalqaisi7114
      @mohammedalqaisi7114 Год назад +1

      @@deekshithkumar2153 I'm having the same problem have you found a solution? maybe the data are not loaded into the faiss correctly idk?

    • @aishu2623
      @aishu2623 11 месяцев назад +1

      Sir, a small doubt about this project: can we upload any PDF and ask questions, or do we need to upload the same PDFs the presenter uploaded?

  • @MrBekimpilo
    @MrBekimpilo Год назад +1

    This is one of the best tutorials ever, caters to a wide audience. The explanations and everything were on point.

    • @alejandro_ao
      @alejandro_ao  Год назад +1

      thanks mate, i appreciate it

    • @MrBekimpilo
      @MrBekimpilo Год назад

      @@alejandro_ao you welcome. I will reach out sometime via email.

  • @RickeyBowers
    @RickeyBowers Год назад +20

    Your pacing and coverage of material is excellent! A progressive external database seems like some future steps. Could support multiple applications, caching at the file level. I can imagine querying a project (selection of files). Suppose it could get more meta - making decisions based on response content.
    Really, looking forward to wherever you take us!

    • @alejandro_ao
      @alejandro_ao  Год назад +1

      absolutely, there are so many ways that these applications can be scaled up for your own projects! keep it up :)

    • @mrudulasawant4677
      @mrudulasawant4677 4 месяца назад

      @@alejandro_ao can we use python 3.10?

    • @ryanvk8318
      @ryanvk8318 Месяц назад

      how to deploy it? Help!

  • @wapoipei
    @wapoipei 5 месяцев назад

    I've been searching for this topic with working samples and you gave us a full working project. You have a gift in teaching, keep it up mate. Thank you Alejandro!

  • @weiimyi
    @weiimyi Год назад +5

    Nice video! I like how you mention all the little details people will miss. Video deliver is clear throughout. Keep up the work!

  • @dswithanand
    @dswithanand 9 месяцев назад

    Explained in a very simple way; anyone from beginner to advanced can easily digest the content of the video. Successfully completed the project. Thanks bro

    • @alejandro_ao
      @alejandro_ao  9 месяцев назад

      very glad to hear this! keep it up!

  • @ScottHufford
    @ScottHufford Год назад +42

    🎯 Key Takeaways for quick navigation:
    00:00 🤖 The video tutorial aims to guide the building of a chatbot that can chat with multiple PDFs.
    00:38 ❓ The chatbot answers questions related to the content of the uploaded PDF documents.
    01:33 🔧 The video tutorial also covers the setting up of the environment, including the installation of necessary dependencies like Python 3.9.
    02:14 🔑 After setting up the environment and installing dependencies, the video progresses to explain the installation of other needed components to execute the task.
    03:38 👩‍💻 The video demonstrates the design of a graphical user interface (GUI) using Streamlit imported as 'St'.
    05:44 🎨 The sidebar of the GUI contains a file-upload feature for the chatbot to interact with PDF documents.
    07:11 🗳️ A 'Process' button is added to the sidebar as an action trigger for the uploaded PDF documents.
    08:57 🗂️ The tutorial explains how to create and store API keys for OpenAI and Hugging Face in an .env file.
    12:26 📄 The video further explains how the chatbot operates: it divides the PDF's text into smaller chunks, converts them into vector representations (embeddings), and stores them in a vector database.
    14:17 🧲 Using these embeddings, similar text can be identified: when a question is asked by a user, it converts the question into an embedding and identifies similar embeddings in the vector store.
    15:28 📚 The identified texts are passed to a language model as context to generate the answer for the user's question.
    19:54 🧩 The video guides the viewers to create functions within the application to extract the raw text from the PDF files.
    23:37 📋 The video further shows how to encode the raw extracted text into the desired format.
    25:03 ✂️ The tutorial provides guidance on creating a function to split the raw text into chunks to feed the model.
    25:28 📜 The presenter explains how to create a function that divides the text into smaller chunks using the LangChain library, which provides a class called 'character text splitter'.
    29:58 🌐 The presenter introduces OpenAI's embedding models for creating vector representations of the text chunks for storage in the Vector store.
    31:37 🏷️ The instructor model from Hugging Face is introduced as a free alternative to OpenAI's and is found to rank higher in the official Hugging Face leaderboard.
    33:59 💽 The speaker explains how to store the generated embeddings locally rather than in the cloud using FAISS from LangChain, a vector store that holds the numeric representations of the text chunks (a condensed code sketch of this pipeline follows this list).
    36:06 ⏱️ Demonstrates how long it could take to embed a few pages of text locally with the Instructor model compared to the OpenAI model.
    40:07 🔄 The host introduces conversation chains in LangChain, which allow for maintaining memory with the chatbot and enabling follow-up questions linked to previous context.
    44:17 🧠 The presenter details how to use conversation retrieval chains for creating chatbot memory and how it aids in generating new parts of a conversation based on history.
    48:05 🔄 The speaker covers how to make variables persistent during a session using Streamlit's session state, useful for using certain objects outside their initialization scope.
    50:23 🎨 The presenter proposes a method of generating a chatbot UI by inserting custom HTML into the Streamlit application, offering fine-tuned customization.
    51:05 📝 The presenter introduces a code pre-prepared to manage CSS styles of two classes - chat messages and bots. Styling is discussed with reference to images and HTML templating for distinct user and bot styles.
    53:07 🔂 The presenter shows how to replace variables within HTML templates, using Python's replace function. By replacing the message variable, personalized messages can be displayed using the pre-arranged template.
    57:42 🗣️ The speaker demonstrates how to handle user input to generate a bot's response using the conversation object. The response is stored in the chat history and makes use of previous user input to generate context-aware responses.
    01:00:14 🔄 A loop is introduced to iterate through the chat history. Messages are replaced in both the user and bot templates resulting in a more dynamic conversation history displayed in the chat.
    01:03:14 💬 The host highlights how the chatbot is able to recall context based on the user's previous queries. The AI remembers the context from previous messages and appropriately answers new queries based on that.
    01:03:27 🔄 The speaker introduces how to switch between different language models, using Hugging Face models as an example. These models from Hugging Face can be used interchangeably with OpenAI's with minor adjustments in the code.
    01:06:00 🔁 The presenter demonstrates how the system works using different models. The response from the Hugging Face model is fetched in the same manner as the previous OpenAI model.
    Made with HARPA AI
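
    A condensed sketch of the pipeline those timestamps walk through, written against the video-era langchain imports (newer releases split these across langchain-community / langchain-openai, so adjust as needed):

        from dotenv import load_dotenv
        from langchain.text_splitter import CharacterTextSplitter
        from langchain.embeddings import OpenAIEmbeddings
        from langchain.vectorstores import FAISS
        from langchain.chat_models import ChatOpenAI
        from langchain.memory import ConversationBufferMemory
        from langchain.chains import ConversationalRetrievalChain

        load_dotenv()  # reads OPENAI_API_KEY from the .env file

        raw_text = "..."  # text extracted from the PDFs (e.g. with pypdf, see the sketch further up)

        # 1. split the raw PDF text into overlapping chunks
        splitter = CharacterTextSplitter(separator="\n", chunk_size=1000,
                                         chunk_overlap=200, length_function=len)
        chunks = splitter.split_text(raw_text)

        # 2. embed the chunks and keep them in an in-memory FAISS vector store
        vectorstore = FAISS.from_texts(texts=chunks, embedding=OpenAIEmbeddings())

        # 3. wire the retriever, the LLM and a chat memory into one conversation chain
        memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
        conversation = ConversationalRetrievalChain.from_llm(
            llm=ChatOpenAI(), retriever=vectorstore.as_retriever(), memory=memory)

        response = conversation({"question": "What is this document about?"})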

  • @adriangheorghe8814
    @adriangheorghe8814 11 месяцев назад +1

    I have been dreaming of something like this for months, great work. I can't wait for the video on persistent vector stores, a real game changer.

    • @alejandro_ao
      @alejandro_ao  10 месяцев назад +1

      in next week’s video i use a persistent vector store :)

    • @akarunx
      @akarunx 9 месяцев назад

      @@alejandro_ao Any updates on persistent vector stores? Eagerly waiting for.

  • @tictaco31530
    @tictaco31530 Год назад +3

    Very nice and thanks very much for sharing!! With little experience I got this to work, and I see a lot of potential.
    It should be possible to save and load a FAISS index file, but I'm not able to get this to work. Instead of uploading a lot of PDFs each time, you could then access an already generated - and saved - vector store. An option to append PDFs later on would also be nice. And... does the vector store have info on what comes from which PDF? And some metadata about the PDFs? Goal: to see the creation date or modified date, to see when that info was created (and may be outdated now ;-)), or to determine which info is newer and older.
    And a plus one on dr. Kónya's question. It would be nice to see references for where the answer came from.

  • @Tsardoz
    @Tsardoz 8 месяцев назад

    Great tutorial but I found a huge difference between LLMs. For my case I had to introduce "llm = ChatOpenAI(model_name="gpt-4-0125-preview")" before I started getting decent results. This model was also able to draw on its own knowledge of the external world rather than rely solely on the pdfs I gave it. I'd love to see a follow up of how these trained models can be saved for later use to avoid training costs each time.

  • @MZak-js7oy
    @MZak-js7oy Год назад +4

    Thank you so much for the detailed explanation. One curious question, as I'm planning to use the Instructor model locally:
    how do I store the embeddings DB locally instead of reprocessing it every time I initialize the app?
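
    One way to do that with the FAISS store used in the video (a sketch, not the only option; the folder name is arbitrary, text_chunks comes from the splitting step, and newer langchain versions also require allow_dangerous_deserialization=True on load):

        import os
        from langchain.embeddings import OpenAIEmbeddings
        from langchain.vectorstores import FAISS

        embeddings = OpenAIEmbeddings()  # or the Instructor embeddings from the video
        INDEX_DIR = "faiss_index"        # arbitrary folder name

        if os.path.isdir(INDEX_DIR):
            # reuse the embeddings computed on a previous run
            vectorstore = FAISS.load_local(INDEX_DIR, embeddings)
        else:
            vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings)
            vectorstore.save_local(INDEX_DIR)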

  • @pickelbarrelofficial1256
    @pickelbarrelofficial1256 Год назад +1

    You are so good at explaining this, you've got a real talent there.

    • @alejandro_ao
      @alejandro_ao  Год назад

      the student has 50% of the merit ;)

  • @scottregan
    @scottregan Год назад +7

    Hey mate, thanks so much. This is my first ever coding and I am thrilled to have it working.
    However, like many others, I am hitting the token limit -- I know this is super obvious to anyone with tacit knowledge, but you've made a beginner's guide, so bear with us. I assumed langchain would take care of this and only "call" the LLM for relevant chunks? Otherwise, what is the point of this whole project? This is my error: "This model's maximum context length is 4097 tokens. However, your messages resulted in 20340 tokens. Please reduce the length of the messages."

    • @charlesd774
      @charlesd774 Год назад +1

      you can't send the entire conversation each time; you have to cut it off at some point. Another option is to generate some kind of summary of each message so you can send summaries instead. This is from a thread on the OpenAI forums.
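
      Two knobs that usually help with that context-length error, sketched against the chain from the video (llm and vectorstore are assumed to exist already; the values are just examples):

          from langchain.memory import ConversationBufferWindowMemory
          from langchain.chains import ConversationalRetrievalChain

          # only keep the last few exchanges instead of the whole conversation
          memory = ConversationBufferWindowMemory(
              k=3, memory_key="chat_history", return_messages=True)

          # only pass the 3 most similar chunks to the LLM
          retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

          conversation = ConversationalRetrievalChain.from_llm(
              llm=llm, retriever=retriever, memory=memory)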

  • @prerithsagar5694
    @prerithsagar5694 4 месяца назад

    Man, you deserve more followers. The quality that you provide is unmatched. Please do videos on chaining multiple LLMs in LangChain.

    • @rouge-tl8ks
      @rouge-tl8ks 4 месяца назад

      Hi, how were you able to integrate the OpenAI portion, as it is not free now? Did you purchase it?

  • @thiagocorreaNT
    @thiagocorreaNT Год назад +5

    Congrats, great content!
    How can I show the PDF link that the response refers to?

  • @arielwadyese7091
    @arielwadyese7091 Месяц назад

    Thanks for making such high quality, descriptive content, wish you an amazing rest of the year.

    • @alejandro_ao
      @alejandro_ao  29 дней назад

      thank you! an amazing rest of the year to you as well :)

  • @GrahamAndersonis
    @GrahamAndersonis Год назад +4

    Great video! Question: when you have a mixed PDF (text and tables), do you need to preprocess the tabular data in some way, like format/convert the inline table to a CSV string, or is pypdf doing enough preprocessing so the table rows can be ingested?

    • @alejandro_ao
      @alejandro_ao  Год назад +4

      hey there! pypdf works pretty well with pdfs that are only text and ideally compiled directly from a text editor. if you have more complicated files, with tabular data (or scanned documents from a photo), i recommend you perform OCR on them to be sure that you get all the data from them.
      when the file contains tabular data or is hard to process, i usually use pdf2image to convert the file to images and then use pytesseract.image_to_string to do OCR on it (a quick sketch of that route follows this thread). i hope this helps!
      sorry for the late reply, i'm out on summer vacation! and thanks for the tip 💪

    • @GrahamAndersonis
      @GrahamAndersonis Год назад

      @@alejandro_ao myself, I’ve been pre-converting pdfs to MS Word (direct word import) and then exporting table objects to pandas dataframes. Text objects are treated normally. Every object has an index for inline ordering.
      I haven’t tried it-you might be able to use Adobe Extract API.
      Question-Have you tried the pre-converting the pdf-to-Word approach? This can be automated, btw. Iterating with python-docx is easy.
      If so, does that behave better than converting to image? Thanks for a great channel!
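
      A rough sketch of the pdf2image + pytesseract route mentioned above (this assumes the poppler and tesseract system packages are installed; the file name is just an example):

          from pdf2image import convert_from_path
          import pytesseract

          def ocr_pdf(path):
              text = ""
              for image in convert_from_path(path, dpi=300):   # render each page as an image
                  text += pytesseract.image_to_string(image)   # OCR the page
              return text

          raw_text = ocr_pdf("scanned_invoice.pdf")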

  • @gbengaomoyeni4
    @gbengaomoyeni4 11 месяцев назад +2

    Wow! This guy is simply brilliant! Continue the good work bruh. You just got a subscriber!

  • @guanjwcn
    @guanjwcn Год назад +11

    Thanks for the insightful videos as always, Alejandro! Could you also do a tutorial on a persistent vectorstore? For the same set of docs, if the app is refreshed, the embeddings of the docs would need to be re-done, which might not be cost effective if the OpenAI embeddings are used. Not sure whether a persistent vectorstore like Pinecone would allow embeddings to be saved on local disk from their first use so the app can just read from there subsequently.

    • @alejandro_ao
      @alejandro_ao  Год назад +28

      hey there, thanks :) sure. indeed, in this example, the vectorstore is in memory, which means that it will be deleted when you refresh the app. pinecone, as far as i know, works only on the cloud. but for local storage i'd probably go for either qdrant or chroma. i'll make a video about that soon! (a quick chroma sketch follows this thread)

    • @lordmelbury7174
      @lordmelbury7174 Год назад +6

      @@alejandro_ao A Langchain + Qdrant vid would be really useful! 👍👍

    • @Sergio-rq2mm
      @Sergio-rq2mm Год назад

      @@alejandro_ao Could you not write the vectorstore variable to file and then source it later?

    • @mairex9978
      @mairex9978 Год назад +1

      chroma could be a solution, you can try it out

    • @tictaco31530
      @tictaco31530 Год назад

      +1
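
      For the Chroma option suggested above, a hedged sketch of a locally persisted store (the directory name is arbitrary, and embeddings / text_chunks are the objects from the tutorial; older versions may also need an explicit vectorstore.persist() call):

          from langchain.vectorstores import Chroma

          # first run: embed the chunks and write the collection to disk
          vectorstore = Chroma.from_texts(
              texts=text_chunks, embedding=embeddings,
              persist_directory="chroma_db")

          # later runs: reopen the same collection without re-embedding
          vectorstore = Chroma(persist_directory="chroma_db",
                               embedding_function=embeddings)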

  • @FunLau-u9e
    @FunLau-u9e 24 дня назад

    Thank you so much for this video! 🎉 Your explanations were super clear and easy to follow. I really appreciate the time and effort you put into breaking down each step - it made all the difference! 🙌

    • @FunLau-u9e
      @FunLau-u9e 24 дня назад

      for those who find some dependency causing errors:
      TypeError: INSTRUCTOR._load_sbert_model() got an unexpected keyword argument 'token'
      > downgrade sentence-transformers==2.2.2
      ImportError: Dependencies for InstructorEmbedding not found
      > downgrade huggingface-hub==0.25.2

    • @alejandro_ao
      @alejandro_ao  23 дня назад

      it is great to hear this! let me know if you have any questions!

  • @topanimespro
    @topanimespro Год назад +6

    Hello, I wanted to express my gratitude for this tutorial. I'm curious to know if the concepts discussed here can also be applied to PDFs that are not primarily written in English (applicability to other languages such as Arabic or French)?

  • @BrandonFoltz
    @BrandonFoltz Год назад +5

    I cannot believe I got this running (because I am a coding idiot). EXCELLENT work.
    Do you know if there is a simple way to get the chat to display in reverse? I.e. the latest query/response is at the top so you don't have to scroll down each time?
    Keep up the great content. You are on your way.

    • @alejandro_ao
      @alejandro_ao  Год назад +8

      thank you man! i'm glad you got this to work 💪 to display the chat in reverse, you just need to reverse a copy of the array containing the messages before displaying it. you can add these 2 lines and then loop through this new array:
      reversed_messages = list(st.session_state.messages)
      reversed_messages.reverse()
      copy the list before calling `reverse()` so you don't mess up the message history you have.
      ps. your videos are gold btw

    • @BrandonFoltz
      @BrandonFoltz Год назад +2

      @@alejandro_ao I will give that a try!
      Very kind of you to say my friend. Lots of us out here just trying to do good work and help others learn.
      Our viewers are the gold; we just provide the light so they can shine.

    • @riyajatar6859
      @riyajatar6859 Год назад

      import numpy as np
      import streamlit as st
      # user_template and bot_template come from the tutorial's htmlTemplates.py

      def handle_userinput(user_question):
          response = st.session_state.conversation({'question': user_question})
          st.session_state.chat_history = response['chat_history']
          chat_list = st.session_state.chat_history

          # user messages sit at even indices, bot responses at odd indices
          USER_INPUT = np.arange(0, len(chat_list), 2).tolist()
          BOT_RESPONSE = np.arange(1, len(chat_list), 2).tolist()
          USER_INPUT.reverse()
          BOT_RESPONSE.reverse()
          for i, j in zip(USER_INPUT, BOT_RESPONSE):
              st.write(user_template.replace(
                  "{{MSG}}", chat_list[i].content), unsafe_allow_html=True)
              st.write(bot_template.replace(
                  "{{MSG}}", chat_list[j].content), unsafe_allow_html=True)

    • @MirthaJosue
      @MirthaJosue Год назад +2

      ha, ha, ha... I felt the same way until I watched this video

  • @jugjiwanseewooruttun7198
    @jugjiwanseewooruttun7198 11 месяцев назад

    Thank you Alejandro, it is very well explained succinctly. Your clarity in explaining the steps made it easy. You are valuable.

  • @GuruShankar-h1s
    @GuruShankar-h1s Год назад +3

    Hello Sir, Thank you for this amazing tutorial.
    I have implemented using the HuggingFaceInstructEmbeddings for embeddings and HuggingFaceHub for the conversation chain.
    I am getting the below error:
    ValueError: Error raised by inference API: Input validation error: `inputs` must have less than 1024 tokens. Given: 1080
    Please guide on how we can resolve this issue.
    Thanks :)
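
    A common workaround for that hosted-inference input limit is to keep each retrieved chunk well under the model's maximum, e.g. by splitting into smaller pieces (the values below are just a starting point; combining this with a smaller k on the retriever, as in the sketch further up, also helps):

        from langchain.text_splitter import CharacterTextSplitter

        text_splitter = CharacterTextSplitter(
            separator="\n",
            chunk_size=500,      # smaller than the 1000 used in the video
            chunk_overlap=100,
            length_function=len)
        text_chunks = text_splitter.split_text(raw_text)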

  • @seanjames1626
    @seanjames1626 Год назад +2

    I have definitely subscribed! Great work. Thank you!

  • @qwerto-ye5pe
    @qwerto-ye5pe Год назад +2

    Hello and thank you for this project. I just wanted to ask if there's a better way to split the text. For example, wouldn't it be better to break the text after a "." or a ","?

    • @rulesmen
      @rulesmen Год назад

      Breaking the text after a "\n" means you are splitting by paragraphs instead of sentences.

  • @donconkey1
    @donconkey1 Год назад +2

    Excellent video!! You are a great teacher and a master of the material you present. Thanks, your videos really help and save me a lot of time.

  • @DadCooks4Us
    @DadCooks4Us 6 месяцев назад +10

    Some of the content is deprecated. Following along with the content as I am trying to learn becomes a bit difficult. Are you planning on updating this?

    • @RajkumarRavi21
      @RajkumarRavi21 4 месяца назад +2

      The video was released a year ago, and langchain ships frequent updates, so it is good to refer to the latest documentation

    • @johnfakes1298
      @johnfakes1298 Месяц назад +1

      @@RajkumarRavi21 even their documentation is deprecated in some places lol, I was looking at it last night

    • @khizarstudy2095
      @khizarstudy2095 4 дня назад

      I was looking at it this morning @@johnfakes1298

  • @giraffa-analytics
    @giraffa-analytics 2 месяца назад

    I love your style and learn a lot from the videos! Thank you!

  • @VladimirBalko
    @VladimirBalko Год назад +19

    🎯 Key Takeaways for quick navigation:
    00:00 📝 This video tutorial demonstrates building a chatbot application that allows users to interact with multiple PDFs simultaneously.
    04:20 🛠️ The tutorial uses Streamlit to create the graphical user interface for the application, enabling users to upload PDFs and ask questions.
    10:20 🔐 API keys from OpenAI and Hugging Face Hub are used to connect to their APIs for language models and embeddings.
    16:39 📚 The application processes PDFs by converting them into chunks of text, creating embeddings, and storing them in a vector store.
    24:07 🔢 The large text from PDFs is split into smaller chunks to be fed into the language model for answering user questions.
    25:28 🧩 The tutorial demonstrates how to divide text into chunks using the "character text splitter" class from the "LangChain" library.
    29:31 📚 Two ways to create vector representations (embeddings) of text chunks: OpenAI's paid embedding models and the free "Hugging Face Instructor" embeddings.
    32:35 🏭 Demonstrates how to create a vector store (database of embeddings) using OpenAI's embeddings or Hugging Face's Instructor embeddings. The Instructor option is free but can be slower without a GPU.
    35:51 🕑 Processing time comparison: OpenAI's embeddings processed 20 pages in about 4 seconds, whereas Instructor embeddings on CPU took around 2 minutes for the same task.
    41:00 💬 Utilizing "conversation chain" in LangChain to build a chatbot with context and memory for a more interactive experience. Demonstrates how to create and use the conversation object.
    51:05 💻 The video demonstrates how to create templates for styling chat messages (CSS) in a Python app for displaying chatbot conversations.
    52:15 📜 CSS is imported and added to the HTML template for styling the chat messages in the Python app.
    54:10 🔄 The Python function `replace` is used to personalize the chat messages and display user-specific messages in the bot template.
    56:41 📝 User inputs are handled to generate responses using a language model (OpenAI or Hugging Face) and displayed with a chat-like structure.
    01:04:07 🏭 The tutorial shows how to switch from using OpenAI to Hugging Face language models in the Python app for chatbot interactions.
    Made with HARPA AI

    • @alejandro_ao
      @alejandro_ao  Год назад +3

      cool

    • @texasfossilguy
      @texasfossilguy Год назад

      wow

    • @Sahil-ev5pm
      @Sahil-ev5pm Год назад

      @@alejandro_ao Good project, but how do we host this to showcase it in our resume? Please guide us on the same.

  • @top_1_percent
    @top_1_percent 9 месяцев назад +1

    Thank you son! You have made this video so step-by-step that a complete beginner like me, even in Python, was able to follow and understand everything. This is helping me a lot in my current assignment. Although with the new version of Python in Feb 2023 FAISS CPU does not work and Instructor XL is no longer the leader, this video cleared so many doubts and concepts of mine that I can dig further and close those gaps with other libraries. God bless you and keep your purpose of sharing knowledge alive. Not everyone can do this in such an efficient and easy way. Cheers!

  • @GraceLiying
    @GraceLiying Год назад +6

    Hi Alejandro. Thank you so much for making this video. This is extremely helpful to me. I followed your tutorial and made my own PDF chatbot. I also made a cool test, if you are interested: ruclips.net/video/EynIc0Shgrw/видео.html. I utilized a fictitious document to prevent the LLM from accessing its existing knowledge, and it did well. I noticed some problems with the current code: once the conversation becomes longer, the session_state may lose chat_history. But overall this is a very fun project to work with. Keep up the excellent work!

  • @ronicksamuel2912
    @ronicksamuel2912 10 месяцев назад +1

    that was a great detailed and direct tutorial, you are a good teacher.💪💪

    • @alejandro_ao
      @alejandro_ao  10 месяцев назад

      Thank you!! I appreciate it

  • @dipitjaywant8044
    @dipitjaywant8044 Год назад

    It is a great video. It gives a thorough understanding of the topic, and I got the entire thing working. My question is: while pushing the whole project to GitHub, how do I hide the OpenAI API key while at the same time making it available to Streamlit Cloud for sharing it as a project link?
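
    One common pattern for that (a sketch, not the only way): add the .env file to .gitignore so the key never reaches GitHub, and on Streamlit Cloud define the key in the app's secrets, reading it like this:

        import os
        import streamlit as st
        from dotenv import load_dotenv

        load_dotenv()  # local development: key comes from the git-ignored .env file
        openai_api_key = os.getenv("OPENAI_API_KEY") or st.secrets["OPENAI_API_KEY"]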

  • @alejandro_ao
    @alejandro_ao  Год назад +10

    Hey there! Let me know what you want to see next 👇

    • @EntertainmentDoseByAkash
      @EntertainmentDoseByAkash Год назад

      I am also doing the same. However, what do you charge approximately per project?

    • @pyw
      @pyw Год назад +2

      amazing, can the app respond with answers from the original pdf context?

    • @EntertainmentDoseByAkash
      @EntertainmentDoseByAkash Год назад

      Yes, anything can be answered except images. But accuracy and speed are low

    • @alejandro_ao
      @alejandro_ao  Год назад +1

      ​@@pyw hey there, yes that's the idea. the app responds only with the context in your PDF files. regarding images, it would depend on the images in your doc, but in some cases we could make the app read that too :)

    • @sushantraikar1
      @sushantraikar1 Год назад

      I have dropped you an email with the request. Please have a look and let me know

  • @theophilus4723
    @theophilus4723 Год назад +2

    Thank you so much Alejandro! The content was great. The explanation was clear and concise. Looking forward for more contents like this. Great job!

  • @alejandro_ao
    @alejandro_ao  9 месяцев назад +8

    💬 Join the Discord Help Server: link.alejandro-ao.com/981ypA
    ❤ Buy me a coffee (thanks): link.alejandro-ao.com/YR8Fkw
    ✉ Join the mail list: link.alejandro-ao.com/o6TJUl

    • @qwadwojohn2628
      @qwadwojohn2628 7 месяцев назад

      Hi Alejandro, any help on how I can set up the remote GitHub repository?

  • @waytojava1928
    @waytojava1928 Год назад +1

    This is great work. Congratulations, and I will support you. A couple of questions: 1) Will I have to upload my PDFs every time I start the project, or can we fix that by storing the details in some files? 2) Can it point to the pages it draws information from? 3) Can you move the chat text box to the bottom rather than the top, just like ChatGPT, and always focus on the end of the page after a response?

    • @vr6191
      @vr6191 Год назад

      Bro, could you help me?
      I worked on this code, and in the function handle_userinput it says st.session_state.conversation is a string and it's not callable; same for the next line with chat_history
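
      One common cause of that "not callable" error is that the session-state slot was initialized with a placeholder value (for example an empty string) and the Process button was never clicked to replace it with the actual chain. A defensive sketch, following the session-state guard from the video:

          if "conversation" not in st.session_state:
              st.session_state.conversation = None
          if "chat_history" not in st.session_state:
              st.session_state.chat_history = None

          def handle_userinput(user_question):
              if st.session_state.conversation is None:
                  st.warning("Please upload and process your PDFs first.")
                  return
              response = st.session_state.conversation({"question": user_question})
              st.session_state.chat_history = response["chat_history"]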

  • @nightpanda3947
    @nightpanda3947 7 месяцев назад

    These are a couple of late questions, but if you could answer them I would be very grateful :) 1) Which other video mentions how to make FAISS persistent? 2) In another video, do you show how to return the PDF docs and page numbers of the response?

  • @aldotanca9430
    @aldotanca9430 Год назад

    Thanks, I particularly appreciated the detailed explanation of the process. Very clear.
    I am planning on an application which will use a large corpus of text and it is likely to be unfunded, so I am finding it hard to decide on what approaches to follow, given new stuff seems to come up every week.
    But I think I will give this approach a go, as a proof of concept at least, and move from there.

  • @tonyww
    @tonyww 11 месяцев назад

    Thank you so much for your high-quality technical walk through of the project. I found it very fascinating.

  • @minhphuongle8017
    @minhphuongle8017 3 месяца назад

    Very good and clear and easy-to-understand tutorial thank you so much

  • @rainbowtrout8331
    @rainbowtrout8331 Год назад

    The way you explain each step is so helpful! Thank you

  • @ronan4681
    @ronan4681 Год назад +1

    Thank you Sir, one of the clearest instructional videos I have watched. Look forward to following your videos

  • @nikolas.adhiarta
    @nikolas.adhiarta 3 месяца назад

    thanks I am lucky to find this content which is very helpful for my work. Greetings from Indonesia

  • @armandopena3272
    @armandopena3272 9 месяцев назад

    Well done! Congratulations. So far, this has been the clearest tutorial on the topic.

    • @alejandro_ao
      @alejandro_ao  9 месяцев назад

      thank you! i'm glad to hear that :)

  • @federiconobili6038
    @federiconobili6038 Год назад +1

    Extremely high quality tutorial! Congratulations! It was extremely helpful. A further step forward would be to store the PDFs' embeddings in a database so that every time you close your application, you don't have to upload your PDFs again. Any suggestion? Thanks. I'm a new subscriber of your channel.

  • @swithmerchan92
    @swithmerchan92 Год назад

    you are a master sensei .... masters of masters THANKS

  • @karannesh7700
    @karannesh7700 11 месяцев назад

    This video is pure gold! Thanks @Alejandro great work! helped me a lot !

  • @antarikshverma8999
    @antarikshverma8999 Год назад

    Thank you for clean and lucid explanation

  • @sirishkumar-m5z
    @sirishkumar-m5z 3 месяца назад +1

    The good news is that LLaMA 3.1 is free to access! A 405B parameter model has enormous potential. I'm excited to see creative uses! # LLaMA, # AI, #HuggingChat

    • @alejandro_ao
      @alejandro_ao  3 месяца назад

      wonderful time to be alive indeed

  • @hesynergy
    @hesynergy 2 месяца назад

    Absolutely superb instruction and ideas. It took me almost 2 years of YouTube to know what the hell you're talking about, but I actually think that I might be able to implement this and learn from your code here in order to come up with something new. Thank you so very much! I’m sending protective and healing vibes to you and your loved ones.
    oh… What about adding Whisper so we can hear our chatbot as well as generate text?
    Namaste
    Chas

  • @neilsmith6638
    @neilsmith6638 Год назад +2

    Hi - i really liked this - as a 'low coder' - it was still very useful.
    Quick question - would the original document be 'shared' with the database? Or is it stored in such a way that it is meaningless to any potential hackers?
    I'm trying to work out how this could be used with confidential documentation.

  • @laurentlemaire
    @laurentlemaire Год назад

    Excellent video! Thanks for describing it so clearly and with the helpful git repo.

  • @Tejas07777
    @Tejas07777 Год назад

    best video so far on LLMs 🔥🔥🔥🔥

  • @techandprogramming4688
    @techandprogramming4688 Год назад +1

    Great content! Thanks for sharing all the knowledge so beautifully and smartly, without getting things complicated.
    Also, I would like to ask that you please make more and more COMPLEX projects for us: LLM as a product or a complete software product, and also some things on LLMOps.

  • @ShashankKumarDubey-j9p
    @ShashankKumarDubey-j9p 8 месяцев назад

    Just an amazing project, I got to understand everything very clearly. One more thing: can you please also share the PDF files you used for getting answers?

  • @ssgoh4968
    @ssgoh4968 Год назад

    Best tutorial ever. Very organised and easy to follow and understand.

    • @alejandro_ao
      @alejandro_ao  Год назад +1

      probably cause you’re the best learner ever 😎

  • @ShikharDadhich
    @ShikharDadhich Год назад

    Awesome video! I am able to follow and run exactly what you did, thanks a lot man!

  • @nameunknown007
    @nameunknown007 11 месяцев назад

    Thanks a lot buddy, it is my first time using all these components and the AI understanding and responding to some random PDF I uploaded gives so much joy hahaha thanks again!

    • @alejandro_ao
      @alejandro_ao  10 месяцев назад

      keep it up, you're doing great! and thanks for the tip!

  • @jamesallison9725
    @jamesallison9725 Год назад +1

    Terrific tutorial, you are a born teacher :)

  • @samsquamsh78
    @samsquamsh78 Год назад

    fantastic video and great pace and explanations of each steps and functions. I subscribed!

  • @learnthetech7152
    @learnthetech7152 Год назад +1

    Hi Alejandro, this is a superb tutorial and thanks so very much for this. Like me, I'm sure many have been inspired by it. And you know what, I saw it is an hour-long video, but at no point did I feel it was too long; it's super engaging.

    • @alejandro_ao
      @alejandro_ao  Год назад

      you are amazing, thank you for being around! i have more videos coming :)

  • @sv6496
    @sv6496 Год назад

    Excellent video and very detailed. Thanks for making the effort to explain every minute detail.
    I'm able to execute this only for the first question. For the next question that I type in.. I'm getting the below error:
    "TypeError: can only concatenate str (not "tuple") to str"
    I added some breakpoints to see where this is breaking, and it looks like the 'response' variable is unable to handle it (expecting a str, getting a tuple):
    def handle_userinput(user_question):
        response = st.session_state.conversation({'question': user_question})
  • @gabrudude3
    @gabrudude3 Год назад +2

    Great video! Thanks for putting it together.
    Quick question, when using personal PDFs with sensitive info., will using OpenAI expose the PDF text to the public, or with embeddings and vector DB it is unlikely? Or, would you recommend using the HuggingFace open source model like Instructor locally for private data?

  • @woojay
    @woojay Год назад

    Thank you so much. This was super helpful for my own that I was building.

  • @valeriociotti7904
    @valeriociotti7904 Год назад +1

    Hi, is there a reason why, when I run the code with the local LLM like in the example and ask a question related to the document, I get the error: ValueError: Error raised by inference API: Input validation error: `inputs` tokens + `max_new_tokens` must be

  • @kirthiramaniyer4866
    @kirthiramaniyer4866 10 месяцев назад

    Very thorough in explaining - good tutorial! Thanks

  • @KeithWatson-f2q
    @KeithWatson-f2q Год назад +1

    I had to add these two lines to the requirements.txt file in order for it to work:
    altair

  • @sammriddhgupta5614
    @sammriddhgupta5614 10 месяцев назад

    Awesome video!! Concise explanations, and it works with openai, thank you!

  • @kyrsid
    @kyrsid 7 месяцев назад

    nice video. you say "there you go" repeatedly. good work.

  • @maria-wh3km
    @maria-wh3km Год назад

    You are awesome, well presented and the code is so clean and perfect. Big thank you!

  • @ranjan25k
    @ranjan25k Год назад +1

    Great quality, and it cleared a lot of doubts I had.
    One quick question: the two PDFs you used as an example are not changing and are static, so is it not viable to ingest the data into the vectorstore once and use it for all future Q&A? Is it possible, or can you hint at how to do it? One thing that comes to mind is that maybe the processing part should not be part of the chatbot. So I ingest and create the vectorstore using my API keys, and then in the chatbot I just need a reference to that vectorstore so that I can play with LLM models and perform Q&A on it rather than uploading everything each time.

  • @mireillemakary9529
    @mireillemakary9529 Год назад +2

    Hi,
    thank you for the clear tutorial, i was wondering though, when you read the user input, according to the diagram, it should have been converted to an embedding vector before conducting the search, is this automatically done when calling st.session_state.conversation({'question': user_question})?

  • @guptarohyt
    @guptarohyt Год назад +1

    This is amazing, it demystifies lots of things. Thanks for sharing this knowledge. I would like to know if there is a possibility to search databases instead of the PDFs?

  • @swithmerchan92
    @swithmerchan92 Год назад

    I am very new, super new to this, but truthfully it is a great pleasure for me and I thank you very much for the help you have given me. I am already writing up the documentation for my work, and your chatbot is the basis of my whole project. If you have any recommendations you can give me in terms of documentation I would appreciate it. I really don't know how to repay you for this special topic you have covered. I have liked and subscribed to your channel. A million thanks for helping me, you are super great, thank you from the bottom of my heart...

  • @FirstSolve
    @FirstSolve Год назад

    U are the best tutor of AI... Thanks a lot brother for explaining things.
    Just want to ask about a few cases, and please let me know if they are feasible or not.
    Can you combine OpenAI ChatGPT + multiple PDFs in a single application or chatbot?
    The cases are:
    If >> the user wants to know things from the open internet, then the user can click a checkbox to enable open internet and ask any questions.
    Else if >> the user can provide a website, web-scrape it, and retrieve the responses from that website.
    Else >> the user can use their local multiple PDF files to extract the response.
    This would be a full solution, I think.

    • @alejandro_ao
      @alejandro_ao  Год назад +2

      thank you for your kind words! and yeah, it's totally feasible. however, it will require even more customization than this app. but yeah, totally. i might do a video about some of those features soon!

  • @ninocrudele
    @ninocrudele Год назад

    Amazing content, very well explained, I immediately subscribed to your channel, please keep going!

    • @alejandro_ao
      @alejandro_ao  Год назад

      awesome, thank you! i totally will :)

  • @harshmunshi6362
    @harshmunshi6362 7 месяцев назад

    Really good tutorial! Had to adapt and make some changes for my use case, but good intro!

  • @Sulls58
    @Sulls58 Год назад

    You are an amazing teacher. well done!

  • @GEORGEBELG
    @GEORGEBELG Год назад +1

    Excellent explanation and coding. Thank you

  • @paule7656
    @paule7656 Год назад

    Thank you sooo much!! That's a great piece of educational content!

  • @sfisothecreative99
    @sfisothecreative99 Год назад

    I just had to subscribe. Great quality content!

  • @aloybanerjee9460
    @aloybanerjee9460 6 месяцев назад +1

    I tried, but I always get ImportError: Dependencies for InstructorEmbedding not found. I even tried downloading the model locally and running it, but I still get the same error.
    Can you kindly help?

  • @marciorodriguesmota7927
    @marciorodriguesmota7927 Год назад +4

    Does anyone know how to solve this error or has had it too? Retrying langchain.embeddings.openai.embed_with_retry.._embed_with_retry in 4.0 seconds as it raised RateLimitError: You exceeded your current quota, please check your plan and billing details.

    • @Beelpatd
      @Beelpatd Год назад +1

      same

    • @KollektivTraumland
      @KollektivTraumland Год назад

      Same

    • @Veerarajankarunanithi
      @Veerarajankarunanithi Год назад +1

      It is because of OpenAI limitations. You need to purchase credits to use it further.

    • @JunaidAzizChannel
      @JunaidAzizChannel Год назад

      You need to purchase a pay as you go plan in Open ai account settings. Once done, you will need to generate a new API key for use
