5 Levels Of LLM Summarizing: Novice to Expert

  • Published: 24 Nov 2024

Comments • 138

  • @cole_mcconnell · 3 months ago +4

    Holy smokes, I'm only 6 minutes in, and this is already by far the best video on this topic on all of youtube. Your content is extremely valuable man. Please keep it up !!

  • @chrisl4211 · 1 year ago +10

    Hey DI, I'm an industrial designer and I watch tons of YouTube every single day for research. Last week I built a bot to download captions from YouTube and store them in JSON; it has already saved me a ton of work, and now I have time for a coffee and half an hour of PS4 every day (even though I had to write more than 10 hours of code every day after and between work last year...). The last couple of days I was working until 4am trying to get it to summarize long captions (many of those lines are not important...) and I kept failing. Then, like magic, your video came up! I haven't even watched it yet, but I'm leaving a comment first to say thanks. I trust you.
    The langchain docs are so messy and unclean that I'm considering redoing them in Cantonese...
    Thank God you make these videos. You saved our lives.

  • @taylorerwin807 · 1 year ago +19

    This content is incredible Greg. It's helping so many of us build the tools of the future (well at least the future of our own workflows!) Thank you!

  • @xevenau · 8 months ago +1

    Would love an update using a local LLM for book summaries.

  • @ujjaldeb6286 · 1 year ago +6

    Really a life-changing playlist
    Will check after 7 years

  • @krisszostak4849 · 1 year ago +6

    Here's some additional info about this kmeans array as I struggled to understand it myself (made by gpt-4):
    The output you're seeing is from the labels_ attribute of the trained KMeans model in the sklearn library in Python.
    The KMeans algorithm clusters data by trying to separate samples into n groups of equal variance, minimizing a criterion known as the inertia or within-cluster sum-of-squares. This algorithm requires the number of clusters to be specified beforehand, which is what you've done by setting n_clusters to 11.
    The labels_ attribute of the KMeans model returns an array where each element is the cluster label of the corresponding data point. These labels range from 0 to n_clusters - 1. So in your case, the labels range from 0 to 10, since you specified 11 clusters.
    To put it in the context of your specific problem, you've passed a list of vectors to the KMeans model. Each vector probably represents a portion of the book you're trying to summarize, perhaps a sentence or a paragraph, which has been transformed into a numerical vector using the langchain library.
    The array array([ 2, 2, 2, 8, 8, 2, 5, 1, 1, 7, 7, 4, 4, 9, 10, 5, 5, 5, 3, 3, 3, 0, 0, 10, 10, 6], dtype=int32) then represents the cluster assignments for each of these vectors. For example, the first three vectors were assigned to cluster 2, the next two to cluster 8, and so on.
    These cluster assignments are based on the distances between the vectors. Vectors that are closer to each other (and hence more similar) will be assigned to the same cluster. By identifying these clusters, the KMeans algorithm helps you find groups of similar sentences or paragraphs in the book. This can help in summarizing the book by identifying the key themes or topics covered.
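    A minimal sketch of the code behind that output, assuming `vectors` is the list of chunk embeddings from the notebook:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    vectors = np.array(vectors)  # one embedding per document chunk

    # Fit k-means with the cluster count used in the video
    kmeans = KMeans(n_clusters=11, random_state=42).fit(vectors)
    print(kmeans.labels_)  # cluster label (0-10) for each chunk, in input order

    # Index of the chunk closest to each centroid - that cluster's "representative"
    closest = [
        int(np.argmin(np.linalg.norm(vectors - c, axis=1)))
        for c in kmeans.cluster_centers_
    ]
    ```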

    • @DataIndependent · 1 year ago +1

      Nice! Happy to chat more on this if you have questions

  • @nattapongthanngam7216 · 7 months ago

    I'm immensely grateful for your enlightening series on the 5 Levels Of LLM Summarizing. The concept of chunks nearest to centroids representing summaries is brilliant and has offered me a fresh perspective. I eagerly anticipate your insights on AGENTS!

  • @feralmachine · 1 year ago +3

    Just wanted to say your code and explanations are so coherent and easy to follow that an innumerate like me who barely knows python was able to grok the entire video played at 1.5x speed. Well done sir! Truly can't wait to try out the clustering technique.

  • @e-genieclimatique · 1 year ago +2

    in brief:
    This video demonstrates five levels of text summarization using language models, specifically OpenAI's GPT models, progressing from novice to expert. It walks through summarizing a few sentences, several paragraphs, multiple pages, an entire book, and finally an unknown amount of text.
    Level 1: Basic prompt - A simple prompt summarizes a couple of sentences from a Wikipedia passage on philosophy.
    Level 2: Prompt templates - Prompt templates dynamically summarize two essays by Paul Graham, producing a one-sentence summary of each.
    Level 3: MapReduce method - A long document is summarized by chunking it into smaller pieces, summarizing each chunk, and then summarizing the summaries.
    Level 4: Best representation vectors - An entire book is summarized by clustering similar passages, picking the most representative passage from each cluster, and summarizing those.
    Level 5: Unknown amount of text - Agents perform research on Wikipedia and summarize the information they find.
    The video showcases the potential of OpenAI models for summarizing and condensing information while retaining the key points and insights.
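    As a rough sketch of the Level 3 map-reduce flow described above (hedged: `long_text` is a placeholder string, and this follows the LangChain API used in the video):

    ```python
    from langchain.chat_models import ChatOpenAI
    from langchain.chains.summarize import load_summarize_chain
    from langchain.text_splitter import RecursiveCharacterTextSplitter

    llm = ChatOpenAI(temperature=0)  # assumes OPENAI_API_KEY is set

    # Map step: summarize each chunk; reduce step: summarize the summaries
    splitter = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=300)
    docs = splitter.create_documents([long_text])

    chain = load_summarize_chain(llm, chain_type="map_reduce")
    summary = chain.run(docs)
    ```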

  • @jakemgrim · 1 month ago

    Awesome video! Thank you for putting all of this information together! I have been wanting to learn more about how to do k-means summarization with LangChain. This video was exactly what I needed! Well done!

  • @IamalwaysOK · 1 year ago +2

    I just wanted to thank you for your awesome video on text summarization. Your explanations were clear, concise, and informative, and your demonstrations were really helpful in understanding the concept. Your passion and expertise on the subject really shone through and I look forward to seeing more great content from you in the future!

  • @faisalsaddique3323 · 1 year ago +1

    The user can enter different values for map_prompt and combine_prompt; the map step applies a prompt to each document, and the combine step applies one prompt to bring the map results together.
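    A minimal sketch of that, assuming `llm` and `docs` exist from the earlier steps:

    ```python
    from langchain.prompts import PromptTemplate
    from langchain.chains.summarize import load_summarize_chain

    # Applied to each chunk during the map step
    map_prompt = PromptTemplate(
        input_variables=["text"],
        template="Write a concise summary of the following:\n\n{text}",
    )

    # Applied once to the combined map results
    combine_prompt = PromptTemplate(
        input_variables=["text"],
        template="Combine these summaries into one short paragraph:\n\n{text}",
    )

    chain = load_summarize_chain(
        llm,
        chain_type="map_reduce",
        map_prompt=map_prompt,
        combine_prompt=combine_prompt,
    )
    summary = chain.run(docs)
    ```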

  • @dennisnichols1374 · 1 year ago +2

    This is badass. Such a cool approach with the best representation vectors. Thanks for continuing to put out great work!

  • @chenqu773 · 1 year ago +4

    Concise and informative, as always!

  • @jmartinezot · 1 year ago +4

    Great video!
    If I am not wrong, the element closest to the cluster centroid is the same as the medoid, which can be computed even when the centroid cannot, by taking the element for which the sum of the distances to the other elements is minimum.
    To choose the number of clusters (k) you can use the elbow criterion. I do not know whether a mixture of Gaussians would be a better clustering method, but it would be worth a try.
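    For reference, a small sketch of the medoid idea from this comment (plain numpy/sklearn, not code from the video):

    ```python
    import numpy as np
    from sklearn.metrics import pairwise_distances

    def medoid(points):
        """The actual data point whose summed distance to all others is smallest."""
        dists = pairwise_distances(points)  # pairwise Euclidean distances
        return points[np.argmin(dists.sum(axis=1))]
    ```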

    • @DataIndependent · 1 year ago +3

      Nice! Thank you for all those points. That's what I wanted to hear to see where to improve it.

  • @krisszostak4849 · 1 year ago +2

    The level of clarity in your content is just insane. I absolutely love it! If I may make a suggestion though - something to consider... Because this technology is growing, changing and evolving so quickly, it would be soooo good to have something like a concept map showing all the main concepts and use cases of let's say langchain with particular ways of achieving it and links to the videos where this thing is explained :D

  • @solitone · 1 year ago +2

    Level 4 is super interesting. I’ve experimented with recursive summarization, but your method promises better results as well as being cheaper. I need to try it!

    • @DataIndependent · 1 year ago +2

      Nice thanks - I’m interested to hear how it works for you

  • @NS_Miata · 1 year ago

    I want to tell you that I really appreciate the work you put into these tutorials! Really, really helpful. As an educationalist I am trying to get such a system working for making learning plans, lessons, learning goals, exam questions, etc. Your work is really helpful and motivates me to start working on an app that can make this stuff straight from documents. Really appreciate it!

  • @ChrisadaSookdhis · 1 year ago +1

    I really like your best representation vector approach!

    • @DataIndependent · 1 year ago

      Nice, thank you - let me know how this works for your use case. I wanna see if it holds up in other domains or applications

  • @DimitarShishkov · 1 year ago

    Your videos are extremely well explained and the use cases and examples are top notch. The vector clustering approach is pretty ingenious. Great stuff, keep it up!

  • @Amitheous · 1 year ago +2

    I've been working on parsing large sets of data (and struggling with balancing efficiency and comprehensiveness), and the idea to cluster and find the centers of clusters to get representative values is awesome! Definitely going to be toying around with this idea.

    • @DataIndependent · 1 year ago

      Nice! Let me know how it works for your use case. Super curious to see if this lines up with more examples

  • @IIIlIIIllIl · 1 year ago

    I wouldn't have even considered the token limitation. Thank you for another great video.

  • @trentleslie2888 · 1 year ago

    The best representation vector approach is slick! 💯

  • @LauraKayeChamberlain-vb7dd · 6 months ago +1

    Would LOVE to see an end-to-end setup for these 🙏

  • @mouadse · 1 year ago

    Very practical and informative video. I was waiting for it ever since I saw your tweet. Thank you Greg

  • @xberna8156 · 1 year ago +10

    Thumbnail 10/10

    • @DataIndependent · 1 year ago +3

      I actually was going to do a Pokémon, and I asked ChatGPT which Pokémon had 5 levels; it said none of them do. So I asked for more metaphors that would appeal to a developer audience, and it suggested Mario and then gave me the names of those 5 Marios. Not bad.

  • @rodrigoniveyro9763 · 4 months ago

    Thanks! You helped me get onto the right path to my solution!

  • @henkhbit5748 · 1 year ago +1

    Great video 👋👋Especially part 4 was illuminating.👍

  • @rlpatrao · 11 months ago

    Very good content, short and to the point.

  • @vCtrlAltDelv · 1 year ago +1

    Super cool. Your tutorials are extremely helpful!

  • @4LXK · 1 year ago

    The choice of k will affect your results significantly.
    If you want to stay in 2 dimensions, consider using the book's table of contents to approximate what k to choose - a good plot should show less overlap between the clusters.
    Alternatively, most vector stores already provide semantic search in higher dimensions and use cosine rather than the Euclidean distance in your example - so you could try near_text with Weaviate and similar stores, or try hybrid search to get the core concept of each chapter before summarization.
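    One cheap way to get cosine-like behavior from the notebook's Euclidean k-means (a sketch, assuming `vectors` holds the chunk embeddings): L2-normalize first, since on unit vectors Euclidean distance orders pairs the same way cosine distance does.

    ```python
    import numpy as np

    vectors = np.array(vectors)
    vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    ```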

  • @ozarkexpeditions · 1 year ago

    This is a great video and summary of the various options!

  • @JumpingCow · 1 year ago

    Thank you! Very clear and helpful. I am getting ready to try myself.

    • @DataIndependent · 1 year ago

      Nice! Please let me know how it goes - I'm curious to see how it does on longer bodies of text in the real world

  • @willjohnson9 · 1 year ago +1

    Great video! Some thoughts.
    I wonder what would happen if you applied your dimensionality reduction process prior to your k-means clustering algorithm. K-means is highly susceptible to the curse of dimensionality (i.e. the more dimensions you add, the more "space" gets added between points, so eventually everything is so far apart that it's hard to justify any two points being in the same cluster), and you're working with a high-dimensional space. As a result, dimensionality reduction steps are a pretty common pairing with clustering methods like k-means. I'd suggest looking up PCA and NCA (if you haven't tried them yet) as methods to consider prior to k-means to potentially improve your clustering. Also, look up the "elbow method" for choosing an optimal value of k when you do - that'll help you justify the number of clusters you move forward with.
    Last recommendation: always, always, always pull a few examples of text that supposedly represent different clusters (in your case, whatever's closest to each centroid). If the model really is interpreting the text the way you suggest (e.g. the intro contains a lot of the information that the first few chapters contain), it's worth poking around and seeing whether that's actually happening. Thanks for the guide!
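    A sketch combining both suggestions (PCA before k-means, then the elbow method; the sizes are placeholders and must stay below the number of chunks):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    vectors = np.array(vectors)  # high-dimensional chunk embeddings

    # Reduce dimensionality first to soften the curse of dimensionality
    reduced = PCA(n_components=min(50, len(vectors))).fit_transform(vectors)

    # Elbow method: compute inertia for a range of k and look for the bend
    inertias = [
        KMeans(n_clusters=k, random_state=42).fit(reduced).inertia_
        for k in range(2, min(20, len(vectors)))
    ]
    ```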

    • @toastrecon · 1 year ago

      So, you're saying, when he gets to 10:01, he could play with the number of clusters? Too many clusters, and it's almost like overfitting where everything has its own cluster, and too few, and you basically end up with one idea. The elbow would be an inflection point?

  • @amarqueze · 2 months ago

    Amazing video Greg!!!

  • @_ptoni_ · 1 year ago

    Worth every single minute. Thanks

  • @minty1110 · 1 year ago

    Nice video. Clear and concise. Well done

  • @Davipar · 1 year ago

    Awesome content Greg. Appreciate it!

  • @klammer75 · 1 year ago

    Amazing! Great job!😎🥳🦾

  • @kevon217 · 1 year ago

    very cool approaches. thanks for sharing!

  • @bioshazard · 1 year ago

    KMeans clustering to identify a representative semantic core from a cluster is brilliant... wow... I gotta apply Level 4 to so many datasets now...

  • @GeorgAubele · 7 months ago

    Awesome video! You do a great job!

  • @manujkumarjoshi9342 · 1 year ago

    Wow!! the clustering technique was incredible

  • @edusantri5181 · 26 days ago

    Wow, this is awesome man, grateful 👏👏

  • @EddieDunn2012 · 1 year ago +1

    This is awesome! Are there any academic papers about 4 and 5 (or maybe 4 combined with 5)? Landmark attention is extending context windows, but token cost (whether in terms of API calls or GPU memory) is going to be a trade-off factor for the foreseeable future.

    • @DataIndependent · 1 year ago +3

      Someone sent me this one which looks interesting. "HYBRID LONG DOCUMENT SUMMARIZATION USING C2F-FAR AND CHATGPT: A PRACTICAL STUDY"
      I haven't read it, but the problem statement points in this direction I'm told
      arxiv.org/pdf/2306.01169.pdf

    • @EddieDunn2012 · 1 year ago +1

      @DataIndependent Thank you!

  • @toddnedd2138 · 1 year ago

    Glad to see that other people are also dealing with this very topic. In scientific terminology, the name you are probably looking for is "semantic clustering" ;) Have a look at the WEClustering paper by Mehta/Bawa/Singh. It might also be worth giving a hierarchical clustering algorithm or the DBSCAN algorithm a shot, since the number of clusters is unknown.
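    A sketch of the DBSCAN suggestion (eps is data-dependent, so the value here is only a starting point; label -1 marks noise):

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    vectors = np.array(vectors)  # chunk embeddings

    # DBSCAN infers the number of clusters from density instead of taking k
    labels = DBSCAN(eps=0.5, min_samples=2, metric="cosine").fit(vectors).labels_
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    ```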

    • @DataIndependent · 1 year ago

      Nice thanks for this. That’s a solid idea and approach.
      I’ll try it out

  • @ehmadzubair · 1 year ago

    I think you can reduce the batch size in the LLM to mitigate rate limits and timeouts.

    • @DataIndependent · 1 year ago

      Nice! Thank you Ehmad - yes that is a solid solution
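      For reference, LangChain's OpenAI wrapper exposes a batch_size parameter for this (a sketch; the default sends more prompts per request):

      ```python
      from langchain.llms import OpenAI

      # Fewer prompts per API call during the map step - gentler on rate limits
      llm = OpenAI(temperature=0, batch_size=5)
      ```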

  • @anuranjankumar2904 · 1 year ago

    Awesome video, especially the Vector Clustering approach. I was wondering if you have any reference for this approach, any paper/blog etc?

  • @rammss · 1 year ago

    Thank you, really good video.

  • @michaelyou2008 · 1 year ago

    In cell 32, the vectors need to be wrapped in np.array; otherwise you will get a 'list' object has no attribute 'shape' error.

    • @usersoft0822 · 1 year ago

      I wrapped the vectors with np.array, but still got ValueError: perplexity must be less than n_samples
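      A sketch of the two fixes these comments describe, in one place:

      ```python
      import numpy as np
      from sklearn.manifold import TSNE

      vectors = np.array(vectors)  # fixes: 'list' object has no attribute 'shape'

      # t-SNE requires perplexity < n_samples; cap it for small document sets
      tsne = TSNE(n_components=2, perplexity=min(30, len(vectors) - 1), random_state=42)
      reduced = tsne.fit_transform(vectors)
      ```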

  • @keswanikar · 11 months ago

    Hi Greg, I want to take a moment to thank you for creating a very crisp and clear explanation with examples. Great job here. One quick question: what are your thoughts on applying this technique to codebases in any language? I am talking about summarizing a codebase (like the book in your example) in a paragraph. Thanks in advance.

    • @DataIndependent · 11 months ago +1

      Code is much tougher because the material references other material from across the base.
      You could do a highlight overview.
      Cursor.sh would be the coolest company to tackle this problem

    • @keswanikar · 11 months ago

      @DataIndependent Looking into Cursor - seems like a great product. What do you mean by a highlight overview? Can you please point me in the right direction? Seems like YouTube chat is too limited for this kind of conversation. Thanks in advance. Great stuff!!

  • @kolinkoehl1681 · 1 year ago

    Great video!

  • @ahsanrossi4328 · 1 year ago

    awesome explanation

  • @priyadoesdatascience5141 · 1 year ago

    For level 4, is there a way to save the text in the vector store and then pass the whole vector store into a summarization chain? This would reduce the workload.

  • @raj345to · 1 year ago

    absolutely mind blowing and time saving

  • @chaower6958 · 1 year ago

    Cannot thank you enough, Greg. I encountered consecutive issues at In[42]: first, I had to put vectors = np.array(vectors) before the "Perform t-SNE and reduce to 2 dimensions" step; second, I had to set perplexity in TSNE() to less than 2, which is my n_samples. Otherwise I would get AttributeError: 'list' object has no attribute 'shape' and ValueError: perplexity must be less than n_samples, respectively. After all that the code ran successfully, but I no longer know whether the plot is correct.

    • @DataIndependent · 1 year ago

      Hm, nothing is jumping out at me right now about the problem.
      Have you tried putting the error message into GPT-4 to debug?

    • @chaower6958 · 1 year ago

      @DataIndependent Also, I'm using Google Colab to practice, and my own data - maybe that's why. Yes, I asked the bot on the MS website, not sure what it's called ;) I solved it with the solutions posted, but I can't be sure whether the plot I got is correct.

  • @MaixPeriyon · 1 year ago

    Please consider making videos on the TypeScript module of LangChain - some of the methods don't exist or are named differently, so it would be a great help.

  • @natecodesai · 6 months ago

    I just reduced summarization time on our Gen AI product to a maximum of 10%, and the cost to 10% as well, using BRV on large documents! Thank you so much! Do you have a Patreon?

    • @DataIndependent · 6 months ago

      Nice! Glad to hear it. Nope, no Patreon. But you sharing your value is great, thank you!

  • @marcframe7449 · 1 year ago

    For example 4, what would you do if the summaries are too large to fit nicely in the combiner? I'm working on a similar problem; my solution was to chain the summary output onto the beginning of the next block of context, but that results in many calls to the LLM with full context.

    • @DataIndependent · 1 year ago +1

      You could then do a map reduce on the large container of summaries. Split it up twice

  • @Finalform77 · 1 year ago

    Great work man! I did it on a book and it worked great, although the k-means plot shows some overlapping clusters - I might need to play around with it. I would love to be able to extract information about a particular topic inside a big document and present a detailed summary: "What does this document say about topic X?" Would you say that's more relevant to the talking-to-your-document code that you made, just maybe with a more sophisticated prompt?

    • @DataIndependent · 1 year ago +1

      Nice!! That's great, glad it worked out.
      The k-means algorithm was the right mix of easy and effective for my use case, so I ran with it. There are a ton more algorithms to try which may be better.
      For that I would lean on a question-and-answer type of chain instead of summarizing.
      If the topic you want is static, then craft a really good "query" which will pull the relevant documents.
      If the topic is dynamic, then look into ways to have the LLM help you bolster the starting query that does the retrieval.

  • @tanyongsheng4561 · 1 year ago

    Great video. Thanks, Greg. I am thinking about how:
    (1) we could efficiently save the embedding results to disk so we can load the data back for the next use - especially after running FAISS.from_documents(texts, embeddings), where the output only lives in memory and cannot be retrieved the next time I want to use the same vector data - so that we could continuously grow our vector database
    (2) if we can save the vector data to local disk, how we could get all the vector data back out of the vector database (where the output is the same as the code: `vectordb = embeddings.embed_documents([x.page_content for x in docs])`)
    I'd appreciate a video on how to manage and grow a vector database. Thanks.

    • @papasilverhand · 1 year ago

      Hey, you could use something like Chroma DB to store the embeddings on disk and then load them back from disk. It can be done using the persistence attribute.
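      A sketch of both options, following the LangChain API of the time (paths are placeholders, `texts` is the Document list):

      ```python
      from langchain.embeddings import OpenAIEmbeddings
      from langchain.vectorstores import FAISS, Chroma

      embeddings = OpenAIEmbeddings()

      # Option 1: FAISS - build once, save, reload later instead of re-embedding
      db = FAISS.from_documents(texts, embeddings)
      db.save_local("faiss_index")
      db = FAISS.load_local("faiss_index", embeddings)

      # Option 2: Chroma - persisted to disk via persist_directory
      db = Chroma.from_documents(texts, embeddings, persist_directory="chroma_db")
      db.persist()
      ```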

  • @zeelthumar · 9 months ago

    Awesome share😊

  • @yasminesmida2585 · 2 months ago

    I have used the map-reduce technique, but it's not fast. What should I do?

  • @AntonioVelazquezBustamante · 1 year ago

    GREAT! GREAT! GREAT! GREAT! GREAT! GREAT! GREAT! GREAT! GREAT! GREAT! GREAT! GREAT!

  • @ohaa3709 · 1 year ago

    Awesome video, thanks! Question: Why do we split the text first using RecursiveCharacterTextSplitter (step A) and then use map-reduce in load_summarize_chain (step B)? As far as I know, map-reduce splits the text itself. Is the reason that the initial step A puts an upper ceiling on the chunk length, ensuring that 1) the LLM can handle every chunk within its token limit, and 2) we reduce cost, because after pre-selecting with the embeddings the chunks are short enough to cost us less? Thanks for your help Greg

    • @DataIndependent · 1 year ago +1

      I don't think map_reduce splits the docs for you; I believe you need to split beforehand.
      Either way, if you split it yourself you have more control over how the chunks are made.
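      One reason to split it yourself, sketched (assumes `llm` and `docs` from the notebook; the 3,000-token ceiling is a placeholder): you can verify every chunk fits the model's context window before the map step runs.

      ```python
      # get_num_tokens is available on LangChain language models
      for d in docs:
          assert llm.get_num_tokens(d.page_content) < 3000, "chunk too large"
      ```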

    • @ohaa3709 · 1 year ago

      @DataIndependent Thanks - can you make a video on why we sometimes exceed the token limit with these approaches even though our chunks are small enough, with tips and tricks to work around it? Also on how to save money by using different models for different summarization tasks? Da Vinci gets quite pricey :D

  • @mohammedaly3144 · 1 year ago

    As a lawyer with no programming experience, I see the enormous potential to transform my profession. I am torn between investing the time to understand your videos or alternatively hiring someone--even DI--to build the tools that I would need. If anyone reading this is available for hire, please let me know!

    • @DataIndependent · 1 year ago

      I have a short list of developers and agencies I send people to if you want to chat about your product.
      If you want to level up your skills then learning this would be great, but working with a professional will speed up anything production grade.
      You can reach me by Twitter DM or contact@dataindependent.com

  • @nogool111 · 1 year ago

    For the number 4 method, does the number of clusters depend on the length of the docs? If I have 16 docs, will 11 clusters work, or do I have to choose another number?

  • @redbaron3555 · 1 year ago

    Why do these jupyter notebooks never work for me? Did I miss a requirements file?

  • @KarthikMcyclist · 1 year ago

    Curious to know if you compared the performance of Curie with GPT-3.5-turbo?

  • @rishu4225 · 1 year ago

    Is there any way to control the length of the summary returned by map-reduce?

    • @DataIndependent · 1 year ago +1

      Totally, make a custom reduce prompt and tell it you want a shorter summary

  • @PonSudhirSajanSivasankaran · 1 year ago

    Awesome video! Is there an ebook or course available that explains everything about LangChain with NLP?

    • @DataIndependent · 1 year ago

      What would you want to see in it?

    • @PonSudhirSajanSivasankaran · 1 year ago

      @DataIndependent I would want to try Named Entity Recognition, Sentence Classification, as well as Summarization for systematic literature reviews of medical articles...

  • @krisszostak4849 · 1 year ago

    Is there a community like Discord or something where people discuss specific use cases and how to achieve them?

  • @cuentadeyoutube5903 · 1 year ago

    I would love to see what each method cost, to get an idea

  • @EigenA · 11 months ago

    Legendary

  • @manasbhattarai4050 · 14 days ago

    Hi Greg. Thank you for the video. I have a question I'd like to ask: what if I want to work on another book instead of "Into Thin Air"? I tried to load another PDF, but I am getting errors - Colab tells me the book has 0 tokens in it.

    • @DataIndependent · 13 days ago

      It sounds like that is a data ingestion problem, not sure what the issue is specifically

    • @manasbhattarai4050 · 13 days ago

      @DataIndependent I sort of made it work, although my "number of documents" cell outputs 1, and because of that I am not able to use clustering. Thank you for your reply.

  • @JessieJussMessy · 1 year ago

    Dang. Nice

  • @TheFreeInt · 11 months ago

    This is genius

  • @krisszostak4849 · 1 year ago +1

    Additionally to my previous comment, here's the solution for understanding what each cluster represents: we need to examine the original data points assigned to each cluster, just to get a deeper look into what's actually going on there :)
    1. Map each sentence/paragraph to its cluster: create a Python dictionary where the keys are the cluster labels and the values are lists of sentences or paragraphs belonging to that cluster.
    clustered_sentences = {i: [] for i in range(num_clusters)}
    for i, label in enumerate(kmeans.labels_):
        clustered_sentences[label].append(sentences[i])  # `sentences` is your original list of sentences/paragraphs
    2. Examine the sentences/paragraphs in each cluster: print them out (or otherwise inspect them) to get a sense of what each cluster represents.
    for cluster, members in clustered_sentences.items():  # renamed from `sentences` to avoid shadowing the original list
        print(f"Cluster {cluster}:")
        for sentence in members:
            print(f" - {sentence}")

  • @MikeZhang-q5h · 1 year ago

    Hi, going through the Jupyter notebook, the section on plotting the graph now throws an error saying 'list' object has no attribute 'shape'. I am not too familiar with the vectors used, but something seems to be wrong with the fit_transform function when the data is passed into it. Any solution for that?

    • @DataIndependent · 1 year ago

      Did you edit the code at all? I haven't heard of this problem yet.
      Make sure your packages and libraries are up to date.

    • @MikeZhang-q5h · 1 year ago

      @DataIndependent Thanks for the quick reply. But no, I did not change anything - just ran everything through and hit the error. Took a screenshot here: pasteboard.co/amwRkxPFxJZ8.png

  • @laurayang8893 · 1 year ago

    Can you make a video on how to touch up long documents using ChatGPT or the OpenAI API? My sister owns a consulting firm and is exploring touching up various reports which are roughly 50 pages long.

  • @jean-baptistedelabroise5391 · 1 year ago

    for high dimensionality data I would probably try UMAP instead of Kmeans

    • @DataIndependent · 1 year ago

      Nice, thanks Jean - yeah, there are a bunch of good clustering algorithms out there, and I went with the quick and easy one after trying a few.

  • @chryselysdemo · 1 year ago

    @greg how can we compare two documents?

    • @DataIndependent · 1 year ago

      You can get the summaries, then ask the LLM to compare the two

  • @maged_helmy · 1 year ago

    Hi Greg! I really love your videos. Is it possible to create a tutorial on how to create a bot that finds information online, i.e. surfs the internet? Thank you for all the good work!

    • @DataIndependent · 1 year ago

      Nice! What types of information are you looking for?

  • @leoxiao2751 · 1 day ago

    I love "GPT 42K" :)

  • @firstyfirst · 1 year ago

    ❤‍🔥❤‍🔥❤‍🔥❤‍🔥❤‍🔥❤‍🔥

  • @MajorBuzzKill · 1 year ago

    I'm so confused... I was just trying to summarize a research paper in 1500 words, but it would be faster to do it manually at this point lol.

  • @willyoungs6095 · 1 year ago

    I love you

  • @MarcelReschke · 1 year ago

    “Napoleon Bonaparte and Serena Williams both achieved remarkable success in their respective fields…” 😂