To learn more about Lightning: github.com/PyTorchLightning/pytorch-lightning
To learn more about Grid: www.grid.ai/
Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/
After three days of coming back to this video, I think I finally got it... Thanks Josh. When I'm in a place to support, I will
Bam!
DOUBLE BAM
This is such perfect timing, I'm supposed to learn and perform a UMAP reduction tomorrow. Thank you!
BAM! :)
You should buy a couple of songs to really show your appreciation!
I can't appreciate how much this channel helped me - so clearly explained!!
Thank you very much! :)
I just found this channel. I'm currently doing my PhD in Bioinformatics and this is helping me immensely to save a lot of time and to learn new methods faster and better (I have a graphical brain so :/) Thank you so much for this!!
Good luck with your PhD! :)
Dude... Dude... You have a gift for explaining stats. Superb.
Thank you!
New StatQuest always gets me amped. High yield, low drag material!!!
Awesome!!!
Wowie, I can finally learn what UMAP stands for and how it reduces dimensionality AFTER I analysed my scRNA-seq data with its help!
BAM!
I really appreciated the UMAP vs t-SNE part. Thanks for the video! Really helpful when one tries to get the main idea behind all the math :)
Thank you very much! :)
I totally agree! The part starting at 16:10 is worth looking back at! Thanks a lot for this great and simple explanation!
Thank you so much. You saved me a lot of time in understanding UMAP.
I'm also eager for you to explain other dimension reduction topics too! (Hoping one day PaCMAP and TriMap get selected to be explained on the channel, or maybe not)
Thank you and I'll keep those topics in mind!
Great Video, Thank you! You are with me since first semester and I am so happy to see a video by you on a topic that is relevant to me
Awesome!
This is awesome, thanks for explaining UMAP so well, and clearly explaining when to use! Love the topics you’re covering
Thank you!
I'd love to see a cross-over episode between StatQuest and Casually Explained.
Big bada-bam.
:)
Not sure if I can hold my breath for long enough before the video starts. Amazing work!! @StatQuest
Thanks!!
This will help me greatly for my MS project.
Good luck!
I was waiting for this. Thank you. Best dimensionally reduced visual explanation out there.
Thank you very much! :)
GOATED channel
bam! :)
Great video; especially liked the echo on the full exposition of 'UMAP' 😂
:)
A PaCMAP dimension reduction explanation video would be very appreciated!
I'll keep that in mind.
Nice explanation, I want to use this as a reference for my projects
Bam! :)
Thanks so much for the great presentation!
Glad you enjoyed it!
He explains it as if I were brainless.
That's the only way I could understand it, thank you!
Thank you very much! :)
Yayy. I was waiting for it.
bam!
Amazing video!! Hope there is a statquest on ICA coming soon :)
One day...
Best comment section on RUclips
Also, now I get why people at the office won't stop praising you
BAM!
Thank you! :)
Great video (as always). You might want to calm it down with the BAMs though. It used to be quirky and fun but having them literally every minute or two is a bit much and forced. Your video creation skills are seriously awesome. I wish I had even half your skills at making these concepts accessible for the YT audience. 👏
Noted
I must say this channel is amazing! I must say this channel is amazing! I must say this channel is amazing!
Important things 3 times. :)
TRIPLE BAM! :)
ROCKINGGGG!!!! As always.
Thanks!
Thank you so much!!! Love the sound effects and the jokes
Glad you like them!
your videos are fantastic
Great content. As always!
Thank you!
New video!!!! Very Noice 👍
BAM!!!
Thank you for the explanation! If possible, could you do a video on the HDBSCAN algorithm?
I'll keep that in mind.
With this amazing way of explaining, please consider doing a Deep TDA quest, starting with the funny 'paraparapepapara' thing instead of the songs
Noted
Adding a comment for the cheery ukulele song at the start; I like it.
Thank you! :)
Your little intros are so silly and charming! ^_^
Thank you!
Thank you so much for this video
Most welcome 😊!
The intro made me subscribe 😂😂
bam! :)
You are amazing!! Thanks!!!
Thank you!
Hello Josh, thank you so much for the amazing video! I have a question about the mapping consistency of UMAP.
In the video, UMAP keeps the mapping consistent (meaning that the mapping does not change over the iterations) when we map the projected points onto the low-dimensional plane based on the high-dimensional similarity scores, unlike t-SNE. My question is, that doesn't necessarily mean the final visualization would be consistent every time, right? Since there is randomized sampling, I don't think the final result would be consistent. I tried it using the umap-learn library and the result was indeed inconsistent.
I'm not sure I explained my question well, so please feel free to tell me if there are any ambiguous points. Thank you and have a nice day :)
The only way to get the exact same graph every time is to set the random seed right before you use UMAP. Although it has less randomness than t-SNE, it still has some randomness.
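For anyone who wants to try this, here is a minimal sketch using the umap-learn library (the toy data and parameter values are just for illustration):

```python
# Minimal sketch: fixing random_state makes repeated UMAP runs reproducible.
# Assumes umap-learn is installed (pip install umap-learn); the data is made up.
import numpy as np
import umap

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # 200 toy points in 10 dimensions

embedding_a = umap.UMAP(random_state=42).fit_transform(X)
embedding_b = umap.UMAP(random_state=42).fit_transform(X)
print(np.allclose(embedding_a, embedding_b))  # expected: True

# Without random_state, each run can come out slightly different.
embedding_c = umap.UMAP().fit_transform(X)
```

Note that fixing random_state may also turn off some of umap-learn's parallelism, so runs can be slower.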
Great video!
Thanks!
Thank you very much!
You're welcome!
Your videos are awesome! They make things so much clearer! But I have a couple of questions:
How do you handle the situation where a point has many identical points (i.e., high-dimensional distance = 0)? How do you calculate sigma_i? For example, if k = 10, but 7-8 of the neighbours are duplicates with D_ij = 0, then sigma_i is undefined. Do I de-duplicate the data first and then add it back in at the end?
And symmetrizing: W'_ij = W'_ji = W_ij + W_ji - W_ij × W_ji, yes? But aren't W_ij and W_ji only calculated for neighbours of i and j? What happens if W_ij exists, but W_ji does not? Do I add i as another neighbour of j's? (But then j would have more than k neighbours.) I'm so confused.
To be honest, I would just try UMAP out and see what it does. It could treat duplicate points as a single point or do something else.
12:44 Why does UMAP decide to move point e farther from b? Is it because the similarity score is zero?
At 12:44 we move 'b' further from 'e' because they were in different clusters in the high dimensional space.
Thank you!!
You're welcome!
Thank you for this great video. I have a question at 8:21: Why are the similarity scores 1.0 and 0.6? Could they just as well be, e.g., 0.9 and 0.7?
I'm sorry for the confusion. There's an important detail that I should have included in this video, and not just the follow up that shows the mathematical details ( ruclips.net/video/jth4kEvJ3P8/видео.html ): the nearest point always has a similarity score of 1.
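To sketch the math (in roughly the notation of the UMAP paper; see the details video for the full story): the raw similarity from point i to a neighbor j is

```latex
w_{i \to j} = \exp\!\left( -\frac{d_{ij} - \rho_i}{\sigma_i} \right),
\qquad \rho_i = \text{distance from } i \text{ to its nearest neighbor},
```

so for the nearest neighbor we have d_{ij} = \rho_i, the exponent is 0, and the score is exactly exp(0) = 1.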
Thank you:)
I was wondering the same thing! Thanks for answering Josh, you are great!
Perfect!
Thank you!
Great video!!!
One query: What characteristics of the features/dataset would we be analyzing when we choose a smaller number of neighbors? Same question for larger values.
The number of nearest neighbors we use does not affect how the features are used. The features are all used equally no matter what.
Thank you for your terrific video! If you have time, could you make a video about densMAP? Again, appreciate your wonderful work! Thank you!
I'll keep that in mind.
Great video! Thank you!! Do you have any plans to clearly explain Generative Topographic Mapping (GTM)? I'd love that!
Not right now, but I'll keep it in mind.
Thanks!
bam! :)
Hey Josh,
Your videos have made my learning curve exponential and I truly appreciate the videos you make! I wonder, have you ever considered making a video about Bayesian target encoding (and other smart categorical encoders)?
I'll keep that in mind.
Hey! I love your videos! Can you do one on Weighted correlation network analysis? I share your videos with my friends and we want to learn about it :)
I'll keep that in mind.
Hello again, and thanks for the awesome video once more!
I have one question... where does the log2(k = num.neighbors) come from? I mean, why log2(k) and not log3(k), log10(k), or ln(k)?
That's a good question. Generally speaking, the decision is often arbitrary. Usually people pick log base 'e' because it has an easy derivative, but in this case, I have no idea what the motivation was.
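For what it's worth, the constraint the UMAP paper imposes (as I understand it) is that the similarity scores from point i to its k nearest neighbors must sum to log2(k), and sigma_i is the value that makes this hold:

```latex
\sum_{j=1}^{k} \exp\!\left( -\frac{d_{ij} - \rho_i}{\sigma_i} \right) = \log_2(k)
```

Why base 2 rather than e or 10 doesn't seem to be justified beyond it working well in practice.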
It will be great if you do some videos on sparse data if you get the time. Would love it. Thanks.
I'll keep that in mind.
Hi Josh, would you please make a video about DiffusionMap? Thank you very much!
I'll keep that in mind.
I would like to check whether my understanding of the process of making the similarity scores symmetric is correct: AC = (0.6 + 0.6)/2 = 0.6 and BC = (0.6 + 1)/2 = 0.8, I think.
At 10:21 I say that UMAP uses a method that is similar to taking the average, but it's not the same as taking the average. So your numbers are not correct. To learn about the difference, see the follow up video: ruclips.net/video/jth4kEvJ3P8/видео.html
Thanks for your amazing video! I am a little bit confused: it seems that UMAP is able to do clustering (based on the similarity scores) and dimensionality reduction for visualization at the same time, so why do researchers usually only use UMAP for visualization?
That's a great question. I guess the big difference between UMAP and a clustering algorithm is that usually a clustering algorithm gives you a metric to determine how good or bad the clustering is. For example, with k-means clustering, we can compare the total variation in the data for each value for 'k'. In contrast, I'm not sure we can do that with UMAP.
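As a tiny illustration of the kind of metric a clustering algorithm gives you (a sketch with scikit-learn; the toy data is made up):

```python
# k-means reports inertia_ (total within-cluster variation), which lets you
# compare different values of k. UMAP gives you no analogous score.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three well-separated blobs in 5 dimensions.
X = np.vstack([rng.normal(loc=c, size=(50, 5)) for c in (0.0, 5.0, 10.0)])

for k in (2, 3, 4, 5):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))  # inertia drops sharply until k = 3
```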
Thank you! Could you do one with self organizing maps?
I'll keep that in mind.
The big picture is ❤️
😃
You got it! BAM! :)
Love you
Thank you!
i just love you
Thanks!
can you cover Locality Sensitive Hashing, and do a clustering implementation in PySpark
I'll keep that in mind.
The echoing UMAP part is amazing 😂
Thanks! :)
Great quest, Josh! First time I noticed the fuzzy parts on the circles and arrows. What tool are you using to make the slides? Looks damn fine!
Thanks! I draw everything in Keynote.
Can you make some videos on recommender systems??
complete list for recommender systems
ruclips.net/p/PLsugXK9b1w1nlDH0rbxIufJLeC3MsbRaa
I hope to soon!
Is the complicated dataset you're referring to one that cannot be explained by one or two PCs?
yep
Have you considered comparing UMAP and Concordex? :)
Not yet.
But how can we separate those clusters? We need cluster centroids for that.
UMAP isn't a clustering method, it's a dimension reduction method. If you want to find clusters, try DBSCAN: ruclips.net/video/RDZUdRSDOok/видео.html
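A common pattern, if it helps: use UMAP only to reduce dimensions, then run a clustering algorithm like DBSCAN on the embedding. A rough sketch (umap-learn and scikit-learn assumed; eps and min_samples would need tuning for real data):

```python
import numpy as np
import umap
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))  # stand-in for your high-dimensional data

# min_dist=0.0 packs similar points tightly, which tends to help clustering.
embedding = umap.UMAP(n_neighbors=15, min_dist=0.0, random_state=42).fit_transform(X)
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(embedding)  # label -1 = noise
```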
How did the low-dimensional plot come about right after the similarity scores?
At 4:14 I talk about how the main idea is that we start with an initial (somewhat random) low dimensional plot that we then optimize based on the high dimensional similarity scores.
StatQuest please do a UMAP tutorial in R next!
I'll keep that in mind. However, I'm doing the mathematical details next.
Hello sir, would you cover a dimension reduction technique which uses hierarchical or k-means clustering if possible?
Thanks in advance.
I'll keep that in mind.
When you explain UMAP in terms of preserving clusters, it makes it sound like UMAP is performing a cluster analysis under the hood. Is my understanding correct when I interpret your use of clusters in the video as a didactic "trick" rather than UMAP actually doing cluster analysis? (Because otherwise, why would we use UMAP to reduce dimensions before doing a cluster analysis, using HDBSCAN or whatever?)
One of the most important parameters you can set for UMAP is the number of high-dimensional neighbors you want each point to have (see 7:15 ). So, in that sense, you control how high-dimensional clusters are identified even though there is no explicit clustering algorithm involved.
@@statquest I suppose the difference between UMAP's high-dimensional neighbours and clusters (as commonly understood) is that the high-dimensional neighbours are "ego-centric clusters" (if that makes any sense), i.e. each point has its own "cluster" of nearest neighbours. Or am I misunderstanding things when I assume that if we set num.neighbors to 4 instead of 3, E would (or could) become part of C's "neighborhood cluster", even though E clearly belongs to a different cluster (properly understood) than C?
🤔
@@critical-chris Yep.
@@statquest thanks for confirming. This helps me wrap my head around UMAP. Next thing will be to figure out what that ”magic” curve is and how it changes based on the number of neighbors you select. I suppose I’ll find that in the mathematical details video… :-)
@@critical-chris yep! :)
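For anyone following along in code, a quick sketch of turning that knob with umap-learn (toy data; the values of n_neighbors are just examples):

```python
# Smaller n_neighbors emphasizes local structure (more, tighter clusters);
# larger values emphasize global structure.
import numpy as np
import umap

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # toy high-dimensional data

embeddings = {k: umap.UMAP(n_neighbors=k, random_state=42).fit_transform(X)
              for k in (5, 15, 50)}
```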
I might have missed this, but how does UMAP initialize the low-dimensional graph? Is it randomized, as in t-SNE?
This is answered at 16:43
As PCA requires correlation between features to find new principal components, does the UMAP approach require correlation between features to project data onto a lower-dimensional space?
no
@@statquest So we can still see clusters even when data is not correlated?
@@ammararazzaq132 That I don't know. All I know is that UMAP does not assume correlations.
@@statquest Okay thankyou. I will look into it a bit more.
How does UMAP identify these initial clusters to begin with?
You specify the number of neighbors. I talk about this at various times, but 17:18 would be a good review.
I have a question. After moving d closer to e, do we still consider moving d to c? Or, would c be moved to d? The direction in the video confuses me.
When we move 'd', we consider both 'e' and 'c' at the same time. In this case, moving 'd' closer to 'e' and closer to 'c' will increase the neighbor score for 'e' a lot but only increase the score for 'c' a little, so we will move 'd'. For details, see: ruclips.net/video/jth4kEvJ3P8/видео.html
13:27 How do you derive the t-distribution fit?
That question, and other details, are answered in the "details" video: ruclips.net/video/jth4kEvJ3P8/видео.html
How does UMAP know which high-dimensional data point belongs to which cluster?
The similarity scores.
I've read that UMAP is better at preserving inter-cluster distance information relative to tSNE, what do you think? Is it reasonable to infer relationships between clusters on a UMAP graph? I try to avoid doing so with tSNE.
To be honest, it probably depends on how you configure the n_neighbors parameter. However, to get a better sense of the differences (and similarities) between UMAP and t-SNE, see the follow up video: ruclips.net/video/jth4kEvJ3P8/видео.html
Concerning distance information, initialization and parameters are important. Read "The art of using t-SNE for single-cell transcriptomics" pubmed.ncbi.nlm.nih.gov/31780648/ and "Initialization is critical for preserving global data structure in both t-SNE and UMAP" dkobak.github.io/pdfs/kobak2021initialization.pdf
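Along those lines, umap-learn lets you control the initialization: init accepts "spectral" (the default), "random", or an array of starting coordinates. Here is a sketch of the PCA-initialization idea from the Kobak & Linderman paper (illustrative values):

```python
import numpy as np
import umap
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))  # stand-in data

# Initialize the low-dimensional points at their PCA coordinates to help
# preserve global structure.
pca_coords = PCA(n_components=2).fit_transform(X)
embedding = umap.UMAP(init=pca_coords, random_state=42).fit_transform(X)
```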
Seems very convoluted compared to K-means or hclust.
UMAP uses a weighted clustering method, so that points that are closer together in high-dimensional space will get higher priority to be put close together in the low dimensional space.
But how do you "decide" that a cluster is a distant cluster?
PS: I guess you consider a point as a distant point if it's not among the k neighbors.
correct
@@statquest But do you keep "adding" new points to the cluster if they are within the k neighbors of the next point, and so on?
Or, in order to define the cluster, do you only consider the k neighbors of the first point?
@@juanete69 We start with a single point. If it has k neighbors, we call it a cluster and add the neighbors to the cluster. Then, for each neighbor that has k neighbors, we add those neighbors and repeat until the cluster is surrounded by points that have fewer than k neighbors.
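If it helps to see the idea in code, here is a rough sketch of that expansion (illustrative only; this is not UMAP's actual implementation, and the radius-based neighborhood is my own stand-in for "having k neighbors"):

```python
from collections import deque
import numpy as np
from sklearn.neighbors import NearestNeighbors

def grow_cluster(X, start, k=3, radius=1.0):
    # Each point's neighborhood = all points within `radius` of it.
    nbrs = NearestNeighbors(radius=radius).fit(X)
    neighborhoods = nbrs.radius_neighbors(X, return_distance=False)
    cluster, queue = {start}, deque([start])
    while queue:
        point = queue.popleft()
        hood = [j for j in neighborhoods[point] if j != point]
        if len(hood) >= k:  # only expand from points with at least k neighbors
            for j in hood:
                if j not in cluster:
                    cluster.add(j)
                    queue.append(j)
    return cluster  # points with fewer than k neighbors stop the growth
```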
Hi-dimensional BAAAMM!
I love it! BAM! :)
Permission to learn, sir
:)
auto-like 👍
bam!
I think he will sing the whole video XD
:)
Thanks, I appreciate the information. However, I think your videos would be easier to watch with a reduction of the "bam" dimension.
Noted!
UMAP is a MESS. No thank you.
noted
Don't say bam....!! It's irritating
noted
@@dummybro499 Double Bam!!!
@@statquest I like it though
Yes, please stop 😂 Thanks for the video, it was clear!
Say bam and don’t listen to haters