UMAP Dimension Reduction, Main Ideas!!!

  • Published: 22 Nov 2024

Comments • 199

  • @statquest  2 years ago +5

    To learn more about Lightning: github.com/PyTorchLightning/pytorch-lightning
    To learn more about Grid: www.grid.ai/
    Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

  • @codewithbrogs3809  8 months ago +4

    After three days of coming back to this video, I think I finally got it... Thanks Josh. When I'm in a place to support, I will

  • @EthanSalter3  2 years ago +41

    This is such perfect timing, I'm supposed to learn and perform a UMAP reduction tomorrow. Thank you!

    • @statquest  2 years ago +5

      BAM! :)

    • @Dominus_Ryder  2 years ago +3

      You should buy a couple of songs to really show your appreciation!

  • @aiexplainai2  2 years ago +8

    I can't express how much this channel has helped me - so clearly explained!!

    • @statquest  2 years ago

      Thank you very much! :)

  • @evatosco-herrera8978  2 years ago +16

    I just found this channel. I'm currently doing my PhD in Bioinformatics and this is helping me immensely to save a lot of time and to learn new methods faster and better (I have a graphical brain so :/) Thank you so much for this!!

    • @statquest  2 years ago +1

      Good luck with your PhD! :)

  • @kennethm.4998  2 years ago +2

    Dude... Dude... You have a gift for explaining stats. Superb.

  • @JulietNovember9  2 years ago +2

    New StatQuest always gets me amped. High yield, low drag material!!!

  • @abramcadabros1755  2 years ago +4

    Wowie, I can finally learn what UMAP stands for and how it reduces dimensionality AFTER I analysed my scRNA-seq data with its help!

  • @terezamiklosova104  2 years ago +11

    I really appreciated the UMAP vs t-SNE part. Thanks for the video! Really helpful when one tries to get the main idea behind all the math :)

    • @statquest  2 years ago

      Thank you very much! :)

    • @smallnon-codingrnabioinfor3792  1 year ago

      I totally agree! The part starting at 16:10 is worth looking back at! Thanks a lot for this great and simple explanation!

  • @SuebpongPruttipattanapong  4 days ago +1

    Thank you so much. You saved me a lot of time in understanding UMAP.
    I'm also eager for you to explain other dimension reduction topics too! (Hope one day PaCMAP and TriMap get selected to be explained on the channel, or maybe not)

    • @statquest  4 days ago

      Thank you and I'll keep those topics in mind!

  • @offswitcher3159  2 years ago +1

    Great video, thank you! You have been with me since my first semester and I am so happy to see a video by you on a topic that is relevant to me.

  • @markmalkowski3695  2 years ago +7

    This is awesome, thanks for explaining UMAP so well and clearly explaining when to use it! Love the topics you’re covering

  • @VCC1316  2 years ago +1

    I'd love to see a cross-over episode between StatQuest and Casually Explained.
    Big bada-bam.

  • @akashkewar  2 years ago +2

    Not sure if I can hold my breath long enough before the video starts. Amazing work!! @StatQuest

  • @user-hg4jk2q  6 months ago +1

    This will help me greatly for my MS project.

  • @dexterdev  2 years ago +2

    I was waiting for this. Thank you. Best dimensionally reduced visual explanation out there.

    • @statquest  2 years ago +1

      Thank you very much! :)

  • @narekatsyy  8 days ago +1

    GOATED channel

  • @rajanalexander4949  8 months ago +1

    Great video; especially liked the echo on the full exposition of 'UMAP' 😂

  • @emiyake  9 months ago +1

    PaCMAP dimension reduction explanation video would be very appreciated!

    • @statquest  8 months ago +1

      I'll keep that in mind.

  • @dataanalyticswithmichael8931  2 years ago +1

    Nice explanation, I want to use this as a reference for my projects.

  • @saberkazeminasab6142  1 year ago +1

    Thanks so much for the great presentation!

  • @AmandaEstevamCarvalho  7 months ago

    He explains it as if I knew nothing about the topic.
    Only that way could I understand it, thank you!

    • @statquest  7 months ago

      Thank you very much! :)

  • @shubhamtalks9718  2 years ago +2

    Yayy. I was waiting for it.

  • @brucewayne6744  2 years ago +3

    Amazing video!! Hope there is a statquest on ICA coming soon :)

  • @whitelady1063  2 years ago +1

    Best comment section on YouTube.
    Also, now I get why people at the office won't stop praising you.
    BAM!

  • @gergerger53  2 years ago +1

    Great video (as always). You might want to calm it down with the BAMs though. It used to be quirky and fun but having them literally every minute or two is a bit much and forced. Your video creation skills are seriously awesome. I wish I had even half your skills at making these concepts accessible for the YT audience. 👏

  • @danli1863  2 years ago +1

    I must say this channel is amazing! I must say this channel is amazing! I must say this channel is amazing!
    Important things 3 times. :)

  • @kiranchowdary8100  2 years ago +1

    ROCKINGGGG!!!! As always.

  • @agentgunnso  6 months ago +1

    Thank you so much!!! Love the sound effects and the jokes

    • @statquest  6 months ago +1

      Glad you like them!

  • @Littlemu22y  2 months ago

    Your videos are fantastic.

  • @MegaNightdude  2 years ago +1

    Great content. As always!

  • @THEMATT222  2 years ago +3

    New video!!!! Very Noice 👍

  • @sumangare1804  1 year ago +1

    Thank you for the explanation! If possible, could you do a video on the HDBSCAN algorithm?

  • @abdoualgerian5396  1 year ago

    With this amazing way of explaining, please consider doing a Deep TDA quest, starting with the paraparapepapara funny thing instead of the songs.

  • @davidhodson6680  1 year ago +1

    Adding a comment for the cheery ukulele song at the start, I like it.

  • @RelaxingSerbian  2 years ago +1

    Your little intros are so silly and charming! ^_^

  • @floopybits8037  2 years ago +1

    Thank you so much for this video

  • @siphosakhemkhwanazi6042  6 months ago +1

    The intro made me subscribe 😂😂

  • @samuelivannoya267  2 years ago +1

    You are amazing!! Thanks!!!

  • @MinsangKim-n1z  3 months ago +1

    Hello Josh, thank you so much for the amazing video! I have a question about the mapping consistency of UMAP.
    In the video, UMAP keeps the mapping consistent (meaning the mapping does not change over the iterations) when we place the projected points on the low-dimensional plane based on the high-dimensional similarity scores, unlike t-SNE. My question is: that doesn't necessarily mean the final visualization will be consistent every time, right? Since there is randomized sampling, I don't think the final result would be consistent. I tried it using the umap-learn lib and the result was indeed inconsistent.
    I'm not sure I explained my question well, so please feel free to tell me if anything is ambiguous. Thank you and have a nice day :)

    • @statquest  3 months ago

      The only way to get the exact same graph every time is to set the random seed right before you use UMAP. Although it has less randomness than t-SNE, it still has some randomness.

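    A minimal sketch of the point in the reply above, assuming the umap-learn package and scikit-learn (the dataset and parameter values are placeholders): fixing random_state pins down UMAP's remaining randomness so the plot comes out the same on every run.

        import umap
        from sklearn.datasets import load_digits

        X = load_digits().data                                  # any numeric matrix works here
        reducer = umap.UMAP(n_neighbors=15, random_state=42)    # fixed seed -> identical embedding each run
        embedding = reducer.fit_transform(X)                    # shape: (n_samples, 2)
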
  • @Pedritox0953  1 year ago +1

    Great video!

  • @meenak722  1 year ago +1

    Thank you very much!

  • @spenmop  5 months ago

    Your videos are awesome! They make things so much clearer! But I have a couple of questions:
    How do you handle the situation where a point has many identical points (i.e. high-dim distance = 0)? How do you calculate sigma_i? For example, if k = 10 but 7-8 of the neighbours are duplicates with Dij = 0, then sigma_i is undefined. Do I de-duplicate the data first and then add it back in at the end?
    And symmetrizing: Wij' = Wji' = Wij + Wji - Wij x Wji, yes? But aren't Wij and Wji only calculated for neighbours of i and j? What happens if Wij exists but Wji does not? Do I add i as another neighbour of j's? (But then j would have more than k neighbours.) I'm so confused.

    • @statquest  5 months ago +1

      To be honest, I would just try UMAP out and see what it does. It could treat duplicate points as a single point or do something else.

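    One pragmatic workaround in the spirit of the commenter's own suggestion (not necessarily what umap-learn does internally, and the data below is made up): de-duplicate the rows, run UMAP on the unique rows, then copy each duplicate's coordinates back.

        import numpy as np
        import umap

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 5))
        X = np.vstack([X, X[:10]])                       # append 10 exact duplicate rows
        X_unique, inverse = np.unique(X, axis=0, return_inverse=True)
        emb_unique = umap.UMAP(n_neighbors=10, random_state=0).fit_transform(X_unique)
        embedding = emb_unique[inverse]                  # duplicates land on the same 2-D point
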
  • @rajankandel8354  2 months ago +1

    12:44 Why does UMAP decide to move point e farther from b? Is it because the similarity score is zero?

    • @statquest  2 months ago

      At 12:44 we move 'b' further from 'e' because they were in different clusters in the high dimensional space.

  • @nbent4607  9 months ago +1

    Thank you!!

    • @statquest  9 months ago

      You're welcome!

  • @franziskakaeppler5602  4 months ago +1

    Thank you for this great video. I have a question about 8:21: Why are the similarity scores 1.0 and 0.6? Could they just as well be, e.g., 0.9 and 0.7?

    • @statquest  4 months ago +3

      I'm sorry for the confusion. There's an important detail that I should have included in this video, and not just the follow up that shows the mathematical details ( ruclips.net/video/jth4kEvJ3P8/видео.html ): the nearest point always has a similarity score of 1.

    • @franziskakaeppler5602  4 months ago +1

      Thank you:)

    • @muriloaraujosouza462  1 month ago +1

      I was wondering the same thing! Thanks for answering Josh, you are great!

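    A hedged sketch of the detail Josh mentions above, following the high-dimensional weight definition in the UMAP paper (the distances, rho_i, and sigma_i values here are made up): distances are shifted by rho_i, the distance to the nearest neighbor, so the nearest neighbor's score is always exp(0) = 1.

        import numpy as np

        def high_dim_score(d_ij, rho_i, sigma_i):
            # score = exp(-(d_ij - rho_i) / sigma_i), clipped so nothing scores above 1
            return float(np.exp(-max(0.0, d_ij - rho_i) / sigma_i))

        print(high_dim_score(d_ij=0.4, rho_i=0.4, sigma_i=1.2))  # 1.0  (the nearest neighbor)
        print(high_dim_score(d_ij=1.5, rho_i=0.4, sigma_i=1.2))  # ~0.4 (a farther neighbor)
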
  • @jatin1995  2 years ago +1

    Perfect!

  • @AkashKumar-qe5jk  2 years ago

    Great video!!!
    One query: what characteristics of the features/dataset would we be analyzing when we choose a smaller value for the number of neighbors? Same question for larger values.

    • @statquest  2 years ago

      The number of nearest neighbors we use does not affect how the features are used. The features are all used equally no matter what.

  • @cytfvvytfvyggvryd  2 years ago

    Thank you for your terrific video! If you have time, could you make a video about densMAP? Again, appreciate your wonderful work! Thank you!

    • @statquest  2 years ago

      I'll keep that in mind.

  • @paulclarke4548  2 years ago +1

    Great video! Thank you!! Do you have any plans to clearly explain Generative Topographic Mapping (GTM)? I'd love that!

    • @statquest  2 years ago

      Not right now, but I'll keep it in mind.

  • @AU-hs6zw  2 years ago +1

    Thanks!

  • @92marjoh  2 years ago

    Hey Josh,
    Your videos have made my learning curve exponential and I truly appreciate the videos you make! I wonder, have you ever considered making a video about Bayesian target encoding (and other smart categorical encoders)?

    • @statquest  2 years ago

      I'll keep that in mind.

  • @veronicacastaneda6274  2 years ago +3

    Hey! I love your videos! Can you do one on Weighted correlation network analysis? I share your videos with my friends and we want to learn about it :)

    • @statquest  2 years ago +1

      I'll keep that in mind.

  • @muriloaraujosouza462  1 month ago

    Hello again, and thanks for the awesome video once more!
    I have one question... where does the log2(k = num.neighbors) come from? I mean, why log2(k) and not log3(k), log10(k), or ln(k)?

    • @statquest  1 month ago

      That's a good question. Generally speaking, the decision is often arbitrary. Usually people pick log base 'e' because it has an easy derivative, but in this case, I have no idea what the motivation was.

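    For context on the exchange above, a rough sketch of where log2(k) enters, assuming the rule used in the UMAP implementation (and covered in the follow-up "mathematical details" video); the toy distances are made up. The bandwidth sigma_i is tuned so that point i's similarity scores sum to log2(num.neighbors).

        import numpy as np

        def fit_sigma(dists, k, n_iter=64):
            # binary-search sigma_i so that the similarity scores sum to log2(k)
            rho = dists.min()                       # distance to the nearest neighbor
            lo, hi = 1e-8, 1e3
            for _ in range(n_iter):
                sigma = (lo + hi) / 2
                total = np.exp(-np.maximum(0.0, dists - rho) / sigma).sum()
                if total > np.log2(k):
                    hi = sigma                      # scores too large -> shrink sigma
                else:
                    lo = sigma
            return sigma

        dists = np.array([0.4, 0.9, 1.3])           # hypothetical distances to k = 3 neighbors
        print(fit_sigma(dists, k=3))
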
  • @ashfaqueazad3897  2 years ago

    It would be great if you did some videos on sparse data when you get the time. Would love it. Thanks.

    • @statquest  2 years ago

      I'll keep that in mind.

  • @wlyang8787  1 year ago

    Hi Josh, would you please make a video about DiffusionMap? Thank you very much!

  • @rafayelkosyan9301  7 days ago

    I would like to understand whether this process of making the similarity coefficient symmetric is correct: AC = (0.6 + 0.6)/2 = 0.6 and BC = (0.6 + 1)/2 = 0.8, I think.

    • @statquest  7 days ago

      At 10:21 I say that UMAP uses a method that is similar to taking the average, but it's not the same as taking the average. So your numbers are not correct. To learn about the difference, see the follow up video: ruclips.net/video/jth4kEvJ3P8/видео.html

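    To make the contrast in the reply above concrete, here is the commenter's plain average next to the fuzzy-union rule quoted in the @spenmop comment earlier (w_ij + w_ji - w_ij * w_ji); the follow-up video has the exact derivation.

        # raw scores between b and c from the example: 1.0 in one direction, 0.6 in the other
        w_bc, w_cb = 1.0, 0.6
        average     = (w_bc + w_cb) / 2              # 0.8 -- the commenter's calculation
        fuzzy_union = w_bc + w_cb - w_bc * w_cb      # 1.0 -- similar in spirit, but not the same
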
  • @grace6228j  2 years ago +2

    Thanks for your amazing video! I am a little bit confused: it seems that UMAP is able to do clustering (based on the similarity scores) and dimensionality-reduction visualization at the same time, so why do researchers usually only use UMAP for visualization?

    • @statquest  2 years ago +1

      That's a great question. I guess the big difference between UMAP and a clustering algorithm is that usually a clustering algorithm gives you a metric to determine how good or bad the clustering is. For example, with k-means clustering, we can compare the total variation in the data for each value for 'k'. In contrast, I'm not sure we can do that with UMAP.

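    A small sketch of the k-means metric Josh refers to above, assuming scikit-learn (the dataset and k values are placeholders): KMeans exposes inertia_, the total within-cluster variation, which you can compare across values of k; UMAP gives you no analogous score.

        from sklearn.cluster import KMeans
        from sklearn.datasets import load_digits

        X = load_digits().data
        for k in (2, 5, 10, 15):
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
            print(k, km.inertia_)                    # lower inertia = tighter clusters for that k
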
  • @lamourpaspourmoi  1 year ago

    Thank you! Could you do one with self organizing maps?

    • @statquest  1 year ago +1

      I'll keep that in mind.

  • @hiankun  2 years ago +1

    The big picture is ❤️
    😃

    • @statquest  2 years ago +1

      You got it! BAM! :)

  • @LazzaroMan  2 years ago +1

    Love you

  • @andreamanfron3199  2 years ago +1

    I just love you

  • @cssensei610  2 years ago +1

    Can you cover Locality Sensitive Hashing, and do a clustering implementation in PySpark?

    • @statquest  2 years ago

      I'll keep that in mind.

  • @mericknal8752  1 year ago +1

    The echoing UMAP part is amazing 😂

  • @Friedrich713  2 years ago +1

    Great quest, Josh! First time I noticed the fuzzy parts on the circles and arrows. What tool are you using to make the slides? Looks damn fine!

    • @statquest  2 years ago

      Thanks! I draw everything in Keynote.

  • @ranjit9427  2 years ago +2

    Can you make some videos on recommender systems??

    • @4wanys  2 years ago

      complete list for recommender systems
      ruclips.net/p/PLsugXK9b1w1nlDH0rbxIufJLeC3MsbRaa

    • @statquest  2 years ago +1

      I hope too soon!

  • @alexlee3511  8 months ago

    Is the complicated dataset you're referring to one that cannot be explained by one or two PCs?

  • @indolizacja9829  3 months ago

    Have you considered comparing UMAP and Concordex? :)

  • @pranilpatil4109  4 months ago

    But how can we separate those clusters? We need cluster centroids for that.

    • @statquest  4 months ago +1

      UMAP isn't a clustering method, it's a dimension reduction method. If you want to find clusters, try DBSCAN: ruclips.net/video/RDZUdRSDOok/видео.html

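    A minimal sketch of the workflow suggested in the reply above, assuming umap-learn and scikit-learn (eps and min_samples are placeholder values): reduce dimensions with UMAP, then cluster the 2-D embedding with DBSCAN.

        import umap
        from sklearn.cluster import DBSCAN
        from sklearn.datasets import load_digits

        X = load_digits().data
        embedding = umap.UMAP(n_neighbors=15, random_state=0).fit_transform(X)   # 2-D by default
        labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(embedding)           # -1 marks noise points
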
  • @prashantsharma-sr5dl  8 months ago

    How did the low-dimensional plot come about right after the similarity scores?

    • @statquest  8 months ago

      At 4:14 I talk about how the main idea is that we start with an initial (somewhat random) low dimensional plot that we then optimize based on the high dimensional similarity scores.

  • @Dominus_Ryder  2 years ago

    StatQuest please do a UMAP tutorial in R next!

    • @statquest  2 years ago

      I'll keep that in mind. However, I'm doing the mathematical details next.

  • @leamon9024  2 years ago

    Hello sir, would you cover a dimension reduction technique which uses hierarchical or k-means clustering if possible?
    Thanks in advance.

    • @statquest  2 years ago

      I'll keep that in mind.

  • @critical-chris  10 days ago

    When you explain UMAP in terms of preserving clusters, it makes it sound like UMAP is performing a cluster analysis under the hood. Is my understanding correct, when I interpret your use of clusters in the video as a didactic "trick" rather than UMAP actually doing cluster analysis? (Because otherwise, why would we use UMAP to reduce dimensions before doing a cluster analysis, using HDBSCAN or whatever)?

    • @statquest  9 days ago

      One of the most important parameters you can set for UMAP is the number of high-dimensional neighbors you want each point to have (see 7:15 ). So, in that sense, you control how high-dimensional clusters are identified even though there is no explicit clustering algorithm involved.

    • @critical-chris  9 days ago

      @@statquest I suppose the difference between UMAP's high-dimensional neighbours and clusters (as commonly understood) is that the high-dimensional neighbours are "ego-centric clusters" (if that makes any sense), i.e. each point has its own "cluster" of nearest neighbours. Or am I misunderstanding things when I assume that if we set num.neighbors to 4 instead of 3, E would (or could) become part of C's "neighborhood cluster", even though E clearly belongs to a different cluster (properly understood) than C?
      🤔

    • @statquest  9 days ago

      @@critical-chris Yep.

    • @critical-chris  9 days ago +1

      @@statquest thanks for confirming. This helps me wrap my head around UMAP. Next thing will be to figure out what that ”magic” curve is and how it changes based on the number of neighbors you select. I suppose I’ll find that in the mathematical details video… :-)

    • @statquest  9 days ago

      @@critical-chris yep! :)

  • @flc4eva  2 years ago

    I might have missed this, but how does UMAP initialize a low-dimensional graph? Is it randomized as in t-SNE?

    • @statquest  2 years ago

      This is answered at 16:43

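    For reference alongside the answer above, umap-learn exposes the choice of starting layout through its init parameter; a spectral-embedding start is the default, and a t-SNE-style random start is also available.

        import umap

        spectral_start = umap.UMAP(init="spectral")   # the umap-learn default
        random_start   = umap.UMAP(init="random")     # purely random initial layout, as in classic t-SNE
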
  • @ammararazzaq132  2 years ago

    As PCA requires correlation between features to find new principal components, does the UMAP approach require correlation between features to project data onto a lower-dimensional space?

    • @statquest  2 years ago

      no

    • @ammararazzaq132  2 years ago

      @@statquest So we can still see clusters even when data is not correlated?

    • @statquest  2 years ago

      @@ammararazzaq132 That I don't know. All I know is that UMAP does not assume correlations.

    • @ammararazzaq132  2 years ago

      @@statquest Okay, thank you. I will look into it a bit more.

  • @joejohnoptimus  7 months ago

    How does UMAP identify these initial clusters to begin with?

    • @statquest  7 months ago

      You specify the number of neighbors. I talk about this at various times, but 17:18 would be a good review.

  • @김광우-w8m  2 years ago

    I have a question. After moving d closer to e, do we still consider moving d to c? Or, would c be moved to d? The direction in the video confuses me.

    • @statquest  2 years ago

      When we move 'd', we consider both 'e' and 'c' at the same time. In this case, moving 'd' closer to 'e' and closer to 'c' will increase the neighbor score for 'e' a lot but only increase the score for 'c' a little, so we will move 'd'. For details, see: ruclips.net/video/jth4kEvJ3P8/видео.html

  • @rajankandel8354  2 months ago

    13:27 How do you derive the t-distribution fit?

    • @statquest  2 months ago

      That question, and other details, are answered in the "details" video: ruclips.net/video/jth4kEvJ3P8/видео.html

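    As a hedged pointer for the question above: the low-dimensional score in the UMAP paper is a t-distribution-like curve of the form 1 / (1 + a * d^(2b)), where a and b are fit from the min_dist setting; the constants below only roughly correspond to the default min_dist = 0.1.

        def low_dim_score(d, a=1.58, b=0.9):
            # bell-shaped curve with heavier tails than a Gaussian, evaluated on low-dimensional distance d
            return 1.0 / (1.0 + a * d ** (2 * b))

        print(low_dim_score(0.0))   # 1.0 at distance zero
        print(low_dim_score(2.0))   # small score for far-apart points
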
  • @ali-om4uv  2 years ago

    How does UMAP know which high-dimensional data point belongs to which cluster?

    • @statquest  2 years ago

      The similarity scores.

  • @Chattepliee  2 years ago

    I've read that UMAP is better at preserving inter-cluster distance information relative to tSNE, what do you think? Is it reasonable to infer relationships between clusters on a UMAP graph? I try to avoid doing so with tSNE.

    • @statquest  2 years ago +1

      To be honest, it probably depends on how you configure the n_neighbors parameter. However, to get a better sense of the differences (and similarities) between UMAP and t-SNE, see the follow up video: ruclips.net/video/jth4kEvJ3P8/видео.html

    • @samggfr  2 years ago

      Concerning distance information, initialization and parameters are important. Read "The art of using t-SNE for single-cell transcriptomics" pubmed.ncbi.nlm.nih.gov/31780648/ and "Initialization is critical for preserving global data structure in both t-SNE and UMAP" dkobak.github.io/pdfs/kobak2021initialization.pdf

  • @TheEbbemonster  2 years ago

    Seems very convoluted compared to K-means or hclust.

    • @statquest  2 years ago

      UMAP uses a weighted clustering method, so that points that are closer together in high-dimensional space will get higher priority to be put close together in the low dimensional space.

  • @juanete69  2 years ago

    But how do you "decide" that a cluster is a distant cluster?
    PS: I guess you consider a point as a distant point if it's not among the k neighbors.

    • @statquest  2 years ago

      correct

    • @juanete69  2 years ago

      @@statquest But do you keep "adding" new points to the cluster if they are within the k neighbors of the next point, and so on?
      Or in order to define the cluster you only consider the k neighbors of the first point?

    • @statquest  2 years ago +1

      @@juanete69 We start with a single point. If it has k neighbors, we call it a cluster and add the neighbors to the cluster. Then, for each neighbor that has k neighbors, we add those neighbors and repeat until the cluster is surrounded by points that have fewer than k neighbors.

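    An illustrative sketch (hypothetical helper names, not UMAP's actual code) of the neighborhood-expansion idea described in the reply above: keep absorbing each member's k nearest neighbors until no new points are reachable.

        from collections import deque

        def grow_cluster(start, neighbors_of):
            # breadth-first expansion over the k-nearest-neighbor relation
            cluster, queue = {start}, deque([start])
            while queue:
                point = queue.popleft()
                for nbr in neighbors_of(point):      # the k nearest neighbors of `point`
                    if nbr not in cluster:
                        cluster.add(nbr)
                        queue.append(nbr)
            return cluster

        knn = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"], "d": ["e"], "e": ["d"]}
        print(grow_cluster("a", knn.__getitem__))    # {'a', 'b', 'c'}
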
  • @gama3181  2 years ago +1

    High-dimensional BAAAMM!

  • @AHMADKELIX  1 year ago +1

    Permission to learn, sir

  • @TJ-hs1qm  2 years ago +1

    auto-like 👍

  • @sapito169  2 years ago

    I think he will sing the whole video XD

  • @connorfrankston5548  2 years ago

    Thanks, I appreciate the information. However, I think your videos would be easier to watch with a reduction of the "bam" dimension.

  • @ScottSummerill  2 years ago +1

    UMAP is a MESS. No thank you.

  • @dummybro499  2 years ago +4

    Don't say bam....!! It irritates