Workaround OpenAI's Token Limit With Chain Types

  • Published: 22 Aug 2024

Comments • 127

  • @leromerom
    @leromerom 1 year ago +27

    I appreciate that your videos always have two phases: the explanation part, and then you go the extra mile to explain the details. Great work!

    • @DataIndependent
      @DataIndependent  1 year ago +2

      Nice! Question for you, do you prefer if I do:
      * Explanation #1, Code #1
      * Explanation #2, Code #2
      or
      * Code #1, Code #2
      * Explanation #1, Explanation #2
      Unsure which method is better for everyone.

    • @leromerom
      @leromerom 1 year ago +4

      @@DataIndependent Explanations first, then code. Option #1 I think is best.

    • @kunalchandra9869
      @kunalchandra9869 4 months ago

      @@DataIndependent The former method would be better; you'll be able to connect with the theoretical explanation better if the practical part is done along with it.

  • @jefferychen8330
    @jefferychen8330 4 months ago

    This tutorial is really well-structured. I really like how you connect the current video with previous ones. Thanks so much!

  • @retardedpenguin1
    @retardedpenguin1 1 year ago +5

    I very rarely click like or dislike on videos... but this one is by far, one of the most helpful videos I've found for what we're working on. You explained everything extremely clearly (unlike the langchain docs, which do not explain things well), and provided a good low-level understanding of how each chain works. Thanks so much!

    • @DataIndependent
      @DataIndependent  1 year ago

      Nice!! That's great, thank you for the kind words.

  • @user-px1xq9im4r
    @user-px1xq9im4r 1 year ago +1

    Absolutely amazed. One thing you should have done is explain each chain type and immediately show the demo, rather than do it at the end. I forgot what refine and map-reduce do as I went towards the demo.
    Other than that, hats off dude.

    • @DataIndependent
      @DataIndependent  1 year ago

      I actually went back and forth on which of these would be better. I chose the method in the video (obvi) but I like the method you're mentioning as well.

  • @feffy380
    @feffy380 1 year ago +6

    Correction for the refine method: the calls are *dependent*, not independent. Each call depends on the results of the previous call.
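
    For context, here is roughly what that dependency looks like in code: a minimal sketch, assuming the classic load_summarize_chain API of that era (the file name and chunk size are placeholders). Each refine call folds the previous call's summary into the next prompt, so the steps cannot run in parallel:

    from langchain.llms import OpenAI
    from langchain.document_loaders import TextLoader
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.chains.summarize import load_summarize_chain

    llm = OpenAI(temperature=0)
    docs = TextLoader("essay.txt").load()  # hypothetical input file
    chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(docs)

    # "refine": summarize chunk 1, then refine that summary with chunk 2, and so on.
    # Each call consumes the previous call's output, hence "dependent".
    chain = load_summarize_chain(llm, chain_type="refine")
    summary = chain.run(chunks)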

  • @vk2875
    @vk2875 3 months ago

    Amazing tutorial on this subject. Really appreciate your passion for detailing it in so much depth. Thank you!!!

  • @nattapongthanngam7216
    @nattapongthanngam7216 4 months ago

    Appreciate the clear explanation of Token Limit

  • @Incognitowil
    @Incognitowil 5 months ago

    I’m glad I found this video!!!

  • @realbutters
    @realbutters 10 months ago

    You just saved me hours of trial and error on a task I was about to start working on this week.
    Subbed immediately, thank you!

  • @Sami-fm3zg
    @Sami-fm3zg 1 year ago +6

    Top-notch explanations, thanks. Would be helpful to have a TypeScript tutorial as well though, if you ever have some time :)

    • @DataIndependent
      @DataIndependent  1 year ago

      Thanks! It'll just be Python for now but I'll keep this in mind. Check out the LangChain Discord for more TS help.

  • @xorlop
    @xorlop 1 year ago +1

    Wow, I am just stunned. This video is so helpful and informative. Thank you so so much!

    • @DataIndependent
      @DataIndependent  1 year ago

      Nice! That's great. Soon it won't be as big of a deal with gpt4-32k

  • @mikemansour1166
    @mikemansour1166 1 year ago +2

    Wooow! Thank you so much. I was really thinking about this the other day when I saw your previous video. This is so helpful. I am not a coder; I used to use Excel to do the refine method (I didn't actually have a name for it) with the GPT-3 API, but your way is more efficient and I can easily implement it in my workflow. I appreciate it so much.

    • @DataIndependent
      @DataIndependent  1 year ago +1

      Nice, glad to hear it. All the magic is with LangChain and the team putting it together.

    • @mikemansour1166
      @mikemansour1166 1 year ago

      @@DataIndependent I was wondering, do you guys have paid courses?

    • @DataIndependent
      @DataIndependent  1 year ago

      @@mikemansour1166 Nope, but happy to do an intro call if you need anything. If more is needed we can do a consulting arrangement.

  • @horseheadhunchback1990
    @horseheadhunchback1990 1 year ago +1

    This is a great series. Thank you for your work!

  • @sifisomalinga9342
    @sifisomalinga9342 1 year ago +2

    This is genius content. Thanks for your amazing work.

  • @sunshadow9704
    @sunshadow9704 8 months ago

    It is very helpful. Small observation: for the refine approach, I think the steps are dependent on each other, not independent.

  • @user-vu9fp9le9n
    @user-vu9fp9le9n 1 year ago

    One word: simply great! Thank you for this.

  • @badrinarayanans355
    @badrinarayanans355 1 month ago

    Really Informative 😊

  • @sup5356
    @sup5356 1 year ago +1

    Thank you for going to the effort.

  • @creativeuser9086
    @creativeuser9086 1 year ago +2

    Why would we use the summarization method over the vector embedding and retrieval method?

    • @xiaotianxt
      @xiaotianxt 1 year ago

      I think the answer is simple 😂: the vector embeddings and retrieval method doesn't solve the summarization problem.

  • @7dainis777
    @7dainis777 1 month ago

    I can see the video was posted a year ago. There is one better approach, which I'm not sure was available a year ago.
    You can use RAG + data embeddings. Each document chunk can be converted to vectors and stored in a vector store/database. The prompt can then also be converted into a vector and matched against the vector store/database, which will give you the closest matches. Then just run the GPT model on the best matches found earlier.
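
    A minimal sketch of that flow, using the LangChain APIs of the time (FAISS, the file name, and the question are stand-in choices; any vector store and loader would do):

    from langchain.document_loaders import TextLoader
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS
    from langchain.llms import OpenAI
    from langchain.chains.question_answering import load_qa_chain

    # Chunk the document and embed each chunk into a vector store.
    docs = TextLoader("big_document.txt").load()
    chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
    db = FAISS.from_documents(chunks, OpenAIEmbeddings())

    # Embed the question, pull only the closest chunks, and send just those to the LLM.
    question = "What did the author work on?"
    best_matches = db.similarity_search(question, k=4)
    chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
    answer = chain({"input_documents": best_matches, "question": question})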

  • @ArjanDuijs
    @ArjanDuijs 1 year ago +2

    Cheers, always learning new stuff watching your videos! Def gonna try the last two methods, although what is concerning me is the cost of using OpenAI.
    Sure, it can do the summary of a 300-page document using the refine method... but at what cost?
    Would be interested to see what the cost is for the different solutions, what the differences in cost are, and which way is more cost-effective to run.

  • @joejoetheawesome
    @joejoetheawesome 1 year ago

    Brilliant explanation! thank you :)

  • @bingo101
    @bingo101 1 year ago

    It's really helpful, thanks

  • @StephenPasco
    @StephenPasco 1 year ago

    Great videos Greg!

  • @briancleary6751
    @briancleary6751 1 year ago +1

    excellent explanation as always, but your video previews always cover important parts of your slides.

  • @Archlense
    @Archlense 1 year ago

    Best tutorial for LangChain ever!!!!

  • @bingolio
    @bingolio 1 year ago

    Great job, Thx! Just subscribed :D

  • @bnmy6581i
    @bnmy6581i 1 year ago

    This is an awesome lesson. Thx

  • @TrashPandamonium
    @TrashPandamonium 1 year ago +1

    The question at the end asked who the friend was that he got permission from, but the text you searched for and showed stated that both he and his friend got permission. Based on that excerpt, the answer seemed incorrect, though you probably just searched for the wrong snippet, I guess.

  • @oryxchannel
    @oryxchannel 1 year ago +1

    Like your studio philosophy. More 'workarounds'. ;-)

    • @DataIndependent
      @DataIndependent  1 year ago

      It's a symbiotic relationship!

    • @oryxchannel
      @oryxchannel 1 year ago

      @@DataIndependent _That's_ for sure. Just wait till someone gets joining up YT comments with AI right...."Hey, wait a minute...you can't have that AI idea...That's *my* intellectual property."😆

  • @Ryan-yj4sd
    @Ryan-yj4sd 1 year ago

    great video!

  • @wangking7384
    @wangking7384 1 year ago

    Thanks ❤you have helped me a lot🎉

  • @ujjwalgupta1318
    @ujjwalgupta1318 1 year ago

    Super useful. Thanks :)

  • @kalyeibakhbyergyen7298
    @kalyeibakhbyergyen7298 1 year ago +1

    I used Japanese text to extract data by chunking, but the problem is that even if I use smaller texts I get a token limit error. For example: you requested 4103 tokens (103 in the messages, 4000 in the completion).

  • @jakobkristensen2390
    @jakobkristensen2390 1 year ago

    This video is great, thanks

  • @maximchuprynsky7472
    @maximchuprynsky7472 1 year ago +1

    Hi! Great video! I have a question: is there any way of putting a string instead of documents into the model?

    • @codewithbrogs3809
      @codewithbrogs3809 1 year ago

      No. Use the langchain.schema.Document object. Example Python code for turning a list of strings into Documents:

      from langchain.schema import Document

      list_of_strings = ["first text", "second text"]  # your list of strings
      list_of_documents = [Document(page_content=s) for s in list_of_strings]

      # After initializing the chain and llm:
      chain({"input_documents": list_of_documents, "question": "your question here"})

  • @henkhbit5748
    @henkhbit5748 1 year ago +1

    Excellent explanation using LangChain methods to split a large document! Like your LangChain videos. 👍
    A small question about your rerank example for Q&A: where are the loaded document(s) stored? Because it would not be efficient if you need to reload the docs every time you ask a question, or if you create a chatbot where multiple users are asking questions.

    • @DataIndependent
      @DataIndependent  1 year ago +1

      The documents are stored on your local machine when you run LangChain like that. LangChain will only send the pieces of information it needs up to your LLM.

    • @henkhbit5748
      @henkhbit5748 1 year ago

      @@DataIndependent That is what I thought, but just to be sure..😀 A follow-up question: which "InstructGPT" model is used if the question is submitted to OpenAI? Davinci, I assume? Can LangChain also use the new gpt-3.5-turbo ChatGPT API chat model, which is much cheaper?

  • @dharanisugumar8699
    @dharanisugumar8699 1 year ago +1

    Your videos are a great help. Much appreciated. I have a layman question: we can achieve this by reading the doc using a Python script, and we can get the output, right? I know AI gives the result without writing much code. But what is the major difference between these two? Thanks in advance.

  • @chienvu3814
    @chienvu3814 1 year ago +1

    Thank you for your work. It's amazing. But may I ask you about the slides? Can you share them with everyone?

  • @SangyHanum
    @SangyHanum 1 year ago +1

    Thanks.
    Nitpicking, but Rich Draves was the friend with him, not the one who gave him permission? Probably a poor question more than the chain.

    • @DataIndependent
      @DataIndependent  1 year ago

      Good call and good nit - agreed. The question could be better :)

  • @charlesleon8961
    @charlesleon8961 1 year ago

    Another con of re-rank would be the fact that the LLM has to parse the entire document for every question, right? I guess this scales from a parallelization standpoint, but it could also cost a lot.

  • @alvintohw
    @alvintohw 1 year ago +2

    Thanks for the clear explanation. So what would be a good method for questions and answers across multiple docs? Seems map re-rank is the most performant but is restricted to one doc.

    • @DataIndependent
      @DataIndependent  1 year ago +1

      Depends on how many documents you have. If you have a ton, then you'll likely want to do embeddings and store them in a vectorstore so you can get the similar ones back. Check out my "question a book" video for more on how to do that.

  • @LACHIVA1969
    @LACHIVA1969 1 year ago

    Yes, I was curious about these LLMs and quickly realized they are trying to squeeze out a lot of money before free open-source APIs show up. Not paying for tokens on something that may be free in 6 months. These corporations are truly greedy. Might try a month's subscription of ChatPDF and spend only $5.

    • @pythonization
      @pythonization 11 months ago

      Apparently we are going to have our own trained LLMs, even on mobile devices. I suppose today's LLMs will become commoditized, but way more sophisticated "supermodel LLMs" will keep everyone glued to their screens.

  • @debojitmandal8670
    @debojitmandal8670 6 months ago

    How do you reduce tokens if you're also passing your memory in the agent? Because I am getting that error because of the conversation buffer memory that is mentioned in my prompt template.

  • @user-tk1bn8xc3i
    @user-tk1bn8xc3i 1 year ago

    Thanks, it is very very very helpful.

  • @rexgloriae316
    @rexgloriae316 1 year ago +2

    Thanks for the videos man. One question - how can we increase the length of the final summary? I tried a custom prompt with something like "Write a summary of a minimum of 1000 words". But it seems to cut off the returned summary.

    • @DataIndependent
      @DataIndependent  1 year ago

      There is a parameter called "max_tokens" you'll want to adjust, which will lengthen the output. You'll set it when you initialize your LLM.
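
      As a sketch, assuming the OpenAI LLM wrapper of that era (the number is illustrative):

      from langchain.llms import OpenAI

      # max_tokens caps the completion length; raise it for longer summaries.
      # Prompt tokens + max_tokens must still fit in the model's context window.
      llm = OpenAI(temperature=0, max_tokens=1500)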

  • @Archlense
    @Archlense 1 year ago

    PERFECT

  • @newphotographyltd6461
    @newphotographyltd6461 1 year ago +1

    Can you please provide a video on how to compare two large financial PDFs using gpt-3.5-turbo?

    • @DataIndependent
      @DataIndependent  1 year ago

      What type of comparing do you want to do?

    • @newphotographyltd6461
      @newphotographyltd6461 1 year ago

      @@DataIndependent Let's take as an example finding that page 5 of one PDF is most similar to page 9 of another PDF.

  • @Kevin-sv5to
    @Kevin-sv5to 1 year ago

    You explained how to fix this issue for text files. How do I handle big CSV files?

  • @edoardodenigris213
    @edoardodenigris213 1 year ago

    I tried it and it works perfectly, thanks! I only have one problem: responses are in general quite short and generic, 5 lines at most. How can I obtain lengthier answers?

  • @antdx316
    @antdx316 4 months ago

    Aren't there programs that automatically cut the files/docs into batches and then process them by themselves?
    I'm trying to search my entire Twitter history and have to split up the data in order to feed it to an LLM.

  • @sarveswarnaidu717
    @sarveswarnaidu717 1 year ago

    How do I implement this on CSV data that includes tasks to aggregate?
    For example, I have supply chain data and the task is to retrieve the total amount spent by a customer.

  • @diegolondrina7510
    @diegolondrina7510 1 year ago

    What chunk size would you recommend? You say in the video that 400 is just for demonstration. What is overlap for?

    • @DataIndependent
      @DataIndependent  1 year ago

      Chunk size depends on your use case.
      I've done 400-2000 and have had good success. As for overlap, though I've used it, I haven't tested it enough to have an opinion.
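
      For reference, a sketch of where those two knobs live (the numbers and file name are illustrative). Overlap repeats the tail of one chunk at the head of the next, so a sentence cut at a boundary still appears whole in at least one chunk:

      from langchain.text_splitter import RecursiveCharacterTextSplitter

      splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
      chunks = splitter.create_documents([open("essay.txt").read()])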

  • @chetan5581
    @chetan5581 7 months ago

    I have a question: how do we do it for CSV files? Thanks a lot!

  • @caiyu538
    @caiyu538 10 months ago

    Great

  • @PhilCunliffe
    @PhilCunliffe 1 year ago

    What if the summaries from the map-reduce method were over the max tokens for the final summarization call?

    • @DataIndependent
      @DataIndependent  1 year ago

      I *think* LangChain will map-reduce it again. If not, then you'll need to do that manually.
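
      A sketch of the manual fallback, under the assumption that you collect the intermediate summaries yourself (the 3000-token threshold is an illustrative headroom figure, not a library constant):

      from langchain.llms import OpenAI
      from langchain.schema import Document
      from langchain.chains.summarize import load_summarize_chain

      llm = OpenAI(temperature=0)
      chain = load_summarize_chain(llm, chain_type="map_reduce")

      # Hypothetical intermediate summaries from a first map step.
      summaries = ["summary of part one...", "summary of part two..."]
      if llm.get_num_tokens("\n".join(summaries)) > 3000:
          # Still too long: treat the summaries as documents and reduce again.
          docs = [Document(page_content=s) for s in summaries]
          final_summary = chain.run(docs)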

  • @biswasshubendu4
    @biswasshubendu4 1 year ago

    Hi, I want to create MOM (minutes of meeting) from documents, which is slightly different from summarization. Will these methods work fine?

  • @sportscardvideos
    @sportscardvideos 1 year ago

    Can this be done with a large CSV or only text?
    Here's my problem: I loaded a large amount of CSV data into Pinecone. Now my prompt is generating a response that is too long. Thanks!

  • @ShaidaMuhammad
    @ShaidaMuhammad 7 months ago

    This is amazing work.
    Has anyone developed a technique that can hold memory with LLMs? I.e., an LLM that can save the context (the complete knowledge in the prompt) in some format to a local disk (memory). The memory is attached to the LLM so it can look things up in the memory if required. The memory would work like a knowledge base.
    Let me know if anyone is working on this or has already worked on it. I need to dig into that.

  • @hrushikeshdas4864
    @hrushikeshdas4864 1 year ago

    Damn! You are God 🙏

  • @VineetShivhare
    @VineetShivhare 1 year ago

    It would be really, really helpful if you could make a video on classification:
    say, subject classification, topic classification, or chains of classifications.

    • @DataIndependent
      @DataIndependent  1 year ago

      Sounds fun. What's a tactical example you'd like to see?

    • @DM-fw5su
      @DM-fw5su 1 year ago

      Taking a large document (100s of pages of technical specification) and developing a classification language for content based on layout, or based on a conjunction of 2+ things in the document. Validating that the AI has a clear understanding of this new classification vocabulary. Then using that vocabulary to query, and allowing the AI to use that vocabulary in its response.

  • @simple-security
    @simple-security 1 year ago

    Question for anyone here:
    What is your approach if you're scanning, say, 100 news websites and you want OpenAI to summarize the news articles and categorize them?
    I can see setting up a loop and getting OpenAI to create a summary for one site at a time.
    I can also see myself using LangChain with prompts and memory to store all the results in one place and then generating the output?
    Any suggestions on how a "research script" would scale are appreciated.
    Thank you.

    • @DataIndependent
      @DataIndependent  1 year ago +1

      If you want to generate summaries, I would keep it at one summary per article per OpenAI call.
      So you'll eat a lot of tokens but the process will be straightforward.
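
      A sketch of that loop, assuming each article fits in one call (the URLs are placeholders; WebBaseLoader is one loader choice among several):

      from langchain.llms import OpenAI
      from langchain.document_loaders import WebBaseLoader
      from langchain.chains.summarize import load_summarize_chain

      llm = OpenAI(temperature=0)
      # "stuff" makes a single OpenAI call per article.
      chain = load_summarize_chain(llm, chain_type="stuff")

      summaries = {}
      for url in ["https://example.com/article-1", "https://example.com/article-2"]:
          docs = WebBaseLoader(url).load()
          summaries[url] = chain.run(docs)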

    • @simple-security
      @simple-security 1 year ago

      @@DataIndependent So are you saying I would use OpenAI to provide a "category" for the news article (one per call, as you said) and then just use Python to group/summarize those categories?

  • @mw3protegy1
    @mw3protegy1 1 year ago

    Where do you stay up to date with the AI advancements, Discord etc.?

  • @grabellasrong6358
    @grabellasrong6358 1 year ago

    Could you rank how much information is lost for each of the methods?

  • @rileyclubb
    @rileyclubb 1 year ago

    Yo man, amazing videos. What do you think about building an LLM based on your YouTube channel so I can get your helpful answers to my questions?

  • @MoonDesignDev
    @MoonDesignDev 1 year ago

    What about LangChain memory?

  • @kefalo84
    @kefalo84 1 year ago

    Can you update the link?

  • @crazycouplenyc
    @crazycouplenyc 1 year ago

    Does Pinecone remove the need for chunking? Does it have infinite memory?

    • @zzamme1505
      @zzamme1505 1 year ago

      No, the doc is still split into chunks, and then the individual chunks are embedded into vectors which are compared against the prompt.

    • @DataIndependent
      @DataIndependent  1 year ago

      Yep, exactly what zzamme1 said

    • @alvarjover7081
      @alvarjover7081 1 year ago

      @@DataIndependent Which method is more accurate, the one in this video or embedding into vectors? I tried this one for a book with 120K words and it took 10 mins to run. Would embedding into vectors make it faster (hopefully down to 3 mins)? I just started using all this, so I'm just learning from the pros! :D Thanks in advance, but also thanks for your content. Top!

  • @acerishi
    @acerishi 1 year ago

    Is there a chain for translation to which I can apply this?

    • @DataIndependent
      @DataIndependent  1 year ago

      Not an out-of-the-box chain, but you could do a custom map-reduce chain with custom prompts for your purpose.
      Check out my latest video on AI-generated emails. You'd do the same thing but with different prompts for your use case.
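
      A sketch of that idea: swapping hypothetical translation prompts in for the default summarize ones, assuming the map_prompt/combine_prompt arguments of the era's load_summarize_chain:

      from langchain.llms import OpenAI
      from langchain.prompts import PromptTemplate
      from langchain.schema import Document
      from langchain.chains.summarize import load_summarize_chain

      map_prompt = PromptTemplate(
          input_variables=["text"],
          template="Translate the following text to French:\n\n{text}",
      )
      combine_prompt = PromptTemplate(
          input_variables=["text"],
          template="Join these translated passages into one coherent text:\n\n{text}",
      )

      # Hypothetical pre-chunked input; in practice use a text splitter.
      docs = [Document(page_content="First chunk..."), Document(page_content="Second chunk...")]

      chain = load_summarize_chain(
          OpenAI(temperature=0),
          chain_type="map_reduce",
          map_prompt=map_prompt,
          combine_prompt=combine_prompt,
      )
      translation = chain.run(docs)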

  • @fahrikhalid3632
    @fahrikhalid3632 1 year ago

    How do I implement this for SQLDatabaseChain?

  • @kuntalpcelebi2251
    @kuntalpcelebi2251 1 year ago

    Would you please make a video about your environment, or provide your Python environment as well? When loading the documents, I am getting this error: UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 10160: character maps to <undefined>. Edit: I had to convert them to PDF and use: loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf", strategy="fast", mode="elements")

    • @pythonization
      @pythonization 11 months ago

      There are tutorials on other Python channels that explain how to use "pipenv". I'm still getting started; different channels use "pipenv" or Docker or Anaconda. I suppose it's good getting comfortable with various environments. I haven't been programming for a while, and I'm also learning pandas.

    • @pythonization
      @pythonization 11 months ago

      Also, this is the only channel that has a playlist of 24 videos breaking down LangChain extensively. A lot of the other videos out there are good introductions, but this "cookbook" approach is helping me get going in programming again.

  • @OBGynKenobi
    @OBGynKenobi 11 months ago

    This is nice, but I don't think any of these work for code. For example, I have a long stored proc and I want to generate documentation for it; breaking it up will lose context and get all confused. Code can be self-referential, i.e., a variable in the first chunk might get referenced in the last chunk, but by that point the context is gone.

    • @DataIndependent
      @DataIndependent  11 months ago +1

      Aligned w/ you, you'll need to chunk it up another way or go graph-based to keep the connections alive. Check out what www.mendable.ai/ is doing; they may have a chunking/retrieval technique that works for you.

  • @planetcrypton9666
    @planetcrypton9666 1 year ago

    How can I apply these solutions when using agents?

    • @DataIndependent
      @DataIndependent  1 year ago

      Check out the agent documentation on LangChain.com for a good start:
      langchain.readthedocs.io/en/latest/modules/agents.html

  • @defidutch402
    @defidutch402 1 year ago +1

    Cool video!
    Some coding skills are required, I guess?

  • @Fluttydev
    @Fluttydev 11 months ago

    These are not good approaches for practical work. Create embeddings of the large document and then write any prompt.