Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/
You're a person who saved me lots of time and pain. Thank you. I wish you the best
Thank you very much! :)
You are, and I cannot stress this enough, a national treasure!! The ease in how you explain things that have eluded me for over a decade and make it click is truly a gift. Thank you so freaking much!!!
Wow, thank you!
The intro song removed my fear of clustering. Thanks for the awesome video.
going on a StatQuest 😌
@@nemothekitten3994 aww...
I used to watch your videos while I was a student. It’s been 3 years since my graduation and I’m still here (I’m changing jobs and need to review some stuff).
Thank you a lot for your incredible work
Congratulations on the new job! BAM! :)
I still don't believe how this content is free. Thank you sir!
Thanks!
your videos help me see the "big picture" of concepts. after your videos, I can actually understand what is going on and why we are doing something. Thank you!
Happy to help!
Even after 7 years, you're still the savior
Glad I could help!
I can't thank you enough. Such clear and helpful explanations. Great.
Thanks! :)
you can, with patreon
Love your videos. The fact that you make it so simple shows the depth of your understanding.
Thank you!
this video proved that "hard" stuff = badly explained stuff
so fuckin true. Not sorry for swearing. Happy learning guys
if you can't explain something in simple terms, then you don't understand it that well.
@@gummybear8883 or you've been a professor for 20 years and are so deep into a topic that you completely forgot how people approach new problems. Your sentence really only applies to novices trying to be teachers.
@@julius4858 We could just change it to: if you can't explain something in simple terms, then you can't teach it that well.
@@MungoBootyGoon Yeah, that is absolutely true. Many of my professors for theoretical computer science are experts on various fields but man do their explanations suck. That's why I have to watch youtube videos for stuff like this.
I already watched some of your videos. This one I watched because I want to apply hierarchical clustering in my thesis. It is about time I buy one of your sweaters. I hope this supports you. Thanks for all the truly great explanations. THANK YOU!
Thank you very much!!! :)
This channel is a treasure! Absolutely incredible job my man
Thank you so much 😀!
Thank you for clearly explaining the details at a moderate speed! You save me lots of time!
Thank you!
The visualizations and simplicity of explanations as well as great examples motivate me to keep learning. Thank you so much for making it so interesting. I'll try to do my bit by buying a t-shirt. 😊
Wow! Thank you very much! :)
hi pragya
StatQuest is the Best! Teaching is an art...and these are master pieces.
WOW! Thank you very much! :)
I have to congratulate you for this video, it gives the basic notions of the hierarchical cluster easy and fast. Bravo!
my teacher keeps flying to new york and doesn't teach us crap about this so thank you for this pookie
Thanks!
StatQuest never disappoints
BAM! :)
Very nice.
I use this in Python and it's a really good way to cluster.
Another thing - from the coding aspect, it's only 1 line of code in Seaborn, very easy.
Thanks for sharing!
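For anyone curious, a minimal sketch of that Seaborn one-liner (seaborn.clustermap), assuming a pandas DataFrame of made-up expression values with genes as rows and samples as columns:

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Made-up expression data: 4 genes measured in 2 samples
df = pd.DataFrame(
    np.random.randn(4, 2),
    index=["Gene 1", "Gene 2", "Gene 3", "Gene 4"],
    columns=["Sample 1", "Sample 2"],
)

# One call clusters the rows and the columns and draws the heatmap + dendrograms
sns.clustermap(df, method="average", metric="euclidean")
plt.show()
```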
I would like to add that:
- single-linkage (comparing the closest points of 2 clusters) tends to form more elliptic clusters;
- complete-linkage tends to form more globular clusters.
So that means that whether you leave your data unscaled, scale it with a StandardScaler, or scale it with a MinMaxScaler will affect your clustering.
Noted!
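A minimal sketch of how you might see the linkage and scaling effects for yourself, using SciPy's linkage/dendrogram and scikit-learn's StandardScaler (all data here is made up):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2)) * [1.0, 100.0]  # wildly different feature scales

# Compare single vs. complete linkage, on raw vs. standardized data
fig, axes = plt.subplots(2, 2, figsize=(10, 6))
for row, data in enumerate([X, StandardScaler().fit_transform(X)]):
    for col, method in enumerate(["single", "complete"]):
        dendrogram(linkage(data, method=method), ax=axes[row][col])
        axes[row][col].set_title(f"{method} linkage, {'scaled' if row else 'raw'}")
plt.tight_layout()
plt.show()
```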
Thank you. Better than university teaching
Thanks!
I am super grateful for this video. You are such an excellent teacher! Thank you for being such a "you"
Wow, thank you!
Your explanations are very clear to me and I have seen all your videos; you are very friendly. I like you very much.
Thank you! 😃
You are simply amazing !! I love your style and simplicity and the word is BAM! .. your videos are very informative and worth going through... thanks for all your hard work in simplifying the complex topics
Thank you so much!!
I am preparing my actuarial exam and you saved me a lot❤
Good luck! :)
you saved yet another day Josh. Thank you
Bam! :)
This channel is truly a treasure trove! I was wondering if you could do a video on consensus clustering? I.e., how to evaluate clustering across multiple models and parameters. You are awesome!
I'll keep that in mind.
Ohh my god, thanks Josh, you are so brilliant. I think Marvel should add a new superhero: "Josh Starmer the Life Saver"
:)
Thank you for allowing me to ascend the stats hierarchy!
bam! :)
Joshua's videos are always helpful. Next time, probably k-means clustering.
The best as always! Love this channel! It's super easy to understand
Thanks!
The opening is always funny
:)
Dear StatQuest! Thank you for the explanation.
1. What is the best way to evaluate the algorithm (silhouette score, ...) in order to decide which clustering method and distance to use? (I understand that the silhouette score is good for choosing the number of clusters, k, but not for deciding between algorithms.)
To decide on the best algorithm, I have been plotting a PCA and coloring the points by the cluster labels, then checking whether the clusters make sense or not. (However, it is known from the literature that PCA does not work well for evaluating binary data.)
2. In the case that the data is binary (e.g., genomic alteration data instead of expression data), what kind of distance would you use?
Best Regards, Manuel
1) I guess it depends. If I had "training" data, with known categories, I would compare how many times the data were correctly and incorrectly grouped. Otherwise, it really just boils down to subjective preference.
2) If you measure a lot of things, the Euclidean distance will still work in this situation.
10:08 do you have any videos that talk about clustering in R?
Thank you for all your explanations btw!!
Unfortunately, no. :(
Hi Josh, amazing video as always. Do you think you could come up with a video on how to determine the best number of clusters to have? I get the Elbow method, but I really struggle with the inconsistency method. I was looking at the inconsistency coefficients, and I am confused about whether they include singleton clusters or whether singleton clusters are excluded. I am also confused about what exactly the "jump" in the inconsistency coefficient is that we are supposed to look out for.
I'll keep that topic in mind.
Hi, just a question. At 7:16, if I'm not mistaken, gene 1 and gene 2 are analogous to variables 1 and 2 (aka x & y in a 2-dimensional dataset). So shouldn't the distance be sqrt((x1-x2)^2 + (y1-y2)^2), i.e., sqrt((1.6-0.5)^2 + (-0.5+1.9)^2)? Sorry if it seems a stupid question, but since I'm not that good at maths in general, I need to turn everything back into the basics to understand. Thank you
In this example we are trying to find how similar (or different) Gene 1 is to (or from) Gene 2 across all samples, so we are comparing the distances between Gene 1 and Gene 2 in both samples. In other words, if both genes have similar values in Sample #1 and similar values in Sample #2, then we will consider both genes to be similar. In contrast, if the values for Gene 1 and 2 are different from each other in Sample #1 and different from each other in Sample #2, then we will consider the genes to be very different from each other. Thus, we are looking at the difference in genes within each sample.
In contrast, you are asking to look at the sample differences within each gene. This would tell us that Sample #1 and Sample #2 are similar or not, and, in this example, we are not interested in that. Does that make sense?
@@statquest I kinda get it. Thank you.
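To make the direction of the comparison concrete, here is a minimal sketch with made-up numbers (not the exact values from the video): each gene is a row, each sample is a column, and the gene-to-gene distance is computed across the samples:

```python
import numpy as np

# Made-up data: rows = genes, columns = samples
genes = np.array([
    [1.6, -0.5],   # Gene 1 in Sample #1 and Sample #2
    [0.5, -1.9],   # Gene 2
    [1.5, -0.6],   # Gene 3
])

# Euclidean distance between Gene 1 and every gene, computed ACROSS the samples
# (i.e., comparing rows to each other, not columns to each other)
dists = np.sqrt(((genes - genes[0]) ** 2).sum(axis=1))
print(dists)                          # ≈ [0.0, 1.78, 0.14]

# The SMALLEST nonzero distance marks the most similar gene
most_similar = np.argsort(dists)[1]   # index 0 is Gene 1 itself
print(f"Most similar to Gene 1: Gene {most_similar + 1}")
```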
Watching this after watching your more recent videos. Missed your 'BAM's a lot!!! You should remake these old videos again! Thanks :)
bam! :)
@@statquest 😍
Found this gem of a channel today. Agreed on the fun rhymes and puns.
Absolutely brilliant..Thank you sooo much for your time and effort!
Thanks! :)
You saved my life😇 Thank you very much.
And I think the link for the sample code in R isn't available right now...
Yep, that's a really old link. Here's a new one: statquest.org/statquest-hierarchical-clustering/
@StatQuest please explain probability and Naive Bayes. Thanks in advance! I am a huge fan of your way of teaching and your small songs creations. Keep up the good work!
Thanks! Naive Bayes is on the to-do list.
@@statquest waiting. plz .
At 7:28, we calculated the number 3.2 as the difference between gene 1 and gene 2. But the whole purpose of calculating it is to figure out which gene is the most similar to gene 1 (for example).
Now my question: after we compute the values between [gene 1 and gene 2], [gene 1 and gene 3], and [gene 1 and gene 4], do we select the gene with the SMALLEST VALUE as the most similar gene to gene 1? Or the BIGGEST VALUE? I think the smallest, but just to be sure...
In this case we want the smallest distance, which means the most similar.
awesome content and delivery
Glad you think so!
it doesn't define if it's must be from the shortest Euclidean or what and basically what makes the dendogram become shorter from another
I'm not sure I understand your comment. Can you clarify?
THANK YOU! This is has been SO HELPFUL!
bam!
Josh, how do we figure out the colors in the first place? @8:47. Say we measure the genes. Red denotes values from 0.8-1, blue denotes values from 0.1-0.2. Am I right?
The coloring is actually arbitrary. Usually we like to have a gradient from the maximum value to the minimum value, but there is no rule that says we should only use 2 colors. We could use 3 or more. The idea is simply to create an image that is informative and useful.
@@statquest Thanks Josh!!
Hello, I am Angelos Didachos and I have a question for StatQuest. 9:54 Is the way of comparing the point to the cluster the same as before? That is, Manhattan distance, Euclidean distance, etc.?
Yes.
@@statquest Hurray!
Excellent explanation!
Thanks!
This is awesome...
Please make a session on K-Modes, KNN, and K-Prototypes
Here's a complete list of my videos so far: statquest.org/video-index/
the intro.......nice one bro🖐
bam! :)
Great explanation Josh! Just one question: are we clustering the samples (data points) or the genes (features)? If we are clustering the genes, doesn't that mean that we are just clustering the correlated features?
In this video we are clustering the genes, and yes, the idea is that correlated features are brought together. We could even just calculate the correlation coefficient for each pair and cluster based on those values.
@@statquest Thanks for your reply.
But I have seen some other blogs where the authors plot 2D data points and use hierarchical clustering. So in real life, do we use hierarchical clustering for data clustering or for feature clustering?
@@emamulmursalin9181 I'm not sure what you mean by "data" clustering, however, we can cluster the rows or the columns with similar ease. It doesn't matter if one is features and the other is samples.
@@statquest Sorry for using an unclear term. Actually I meant "samples" by using the term "data".
So, can hierarchical clustering be used for "feature clustering" (for example, finding correlated features and removing the redundant ones) and also for "sample clustering" (e.g., just like k-means clustering)?
@@emamulmursalin9181 Yes. We can cluster the rows just as easily as we cluster the columns.
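A minimal sketch of the correlation-based idea mentioned above, on made-up data: turn each pairwise correlation into a distance (1 - r) and hand that to hierarchical clustering:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
genes = rng.normal(size=(5, 10))   # made up: 5 genes x 10 samples

# Correlation between every pair of genes, turned into a distance
corr = np.corrcoef(genes)
dist = 1.0 - corr                  # perfectly correlated genes -> distance 0
np.fill_diagonal(dist, 0.0)

# linkage() wants a condensed distance vector, not a square matrix
Z = linkage(squareform(dist, checks=False), method="average")
dendrogram(Z, labels=[f"Gene {i+1}" for i in range(5)])
plt.show()
```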
I love this channel so much
Thank you! :)
great videos, I like the way you explain these topics
4.4 K likes, zero dislikes! You're awesome. Thanks very much
bam!
Great channel. Clearly explained almost all of the ML topics I watched. One question: what does "gene" stand for? Is it a feature of the data?
Yes, it's a feature.
Hey Josh, what is the difference between PCA and Hierarchical Clustering? Could you give me an example for each one? I know some people say "PCA groups variables" and "HC groups observations". I think the output from each one represents that explanation. But it seems we could use both techniques to answer the same question...
Although both methods can be applied to the exact same problem (and frequently are both applied to the same problem), they have different strengths. PCA, for example, has loading scores, which would tell us how much each individual variable contributes to the clustering. In contrast, hierarchical clustering gives us a nice heatmap style graph that makes it easy to see the big picture in how and why things are similar and different. I say "try them both."
@@statquest BAMM! I got it. Thank you Sir!
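A minimal sketch of "try them both" on the same made-up table: PCA for the loading scores, hierarchical clustering for the heatmap:

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(10, 4)),
                  columns=["var1", "var2", "var3", "var4"])  # made-up data

# PCA's loading scores: how much each variable contributes to each component
pca = PCA(n_components=2).fit(df)
loadings = pd.DataFrame(pca.components_.T, index=df.columns,
                        columns=["PC1", "PC2"])
print(loadings)

# Hierarchical clustering's heatmap + dendrograms: the big-picture view
sns.clustermap(df)
plt.show()
```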
Great video! There is a minor mistake around 8:21. You should replace "sample" by "gene" in calculating the distance, i.e., |difference in gene #1| + |difference in gene #2| +...
I believe the video is correct. For details, see: 6:01
thanks for amazing lessons
Explained in a simple manner.
9:44 average clusters is mean linkage; centroid is centroid of a cluster?
I'm not sure I understand your question.
Please add a video on Latin Square design, Joshua!
I am going to pass my stats final tomorrow, only because of your videos :D
your students are lucky.
The PCA and clustering question was worth 30% of the total marks on my exam today, and I managed to answer them so well only because of your videos. You're a savior. Thank you!!
Great channel! Clear explanations. In HCA, could you not follow up the clustering of rows (genes) by clustering the columns (samples)? Is this done automatically? It does not seem like the best heatmap would be produced if you just cluster/shuffle the rows; you would have to cluster/shuffle the columns, too, right? Also, must/should the data be standardized first?
You can cluster both columns and rows. And sometimes standardizing helps, sometimes it doesn't. It's worth trying both options.
@@statquest Thanks!
You saved me a week
Awesome! :)
Congratulations from Brazil!
Hey Josh! Your videos are great! Thank you for the effort you've put into them!
If you allow me... have you considered making videos explaining DBSCAN and HDBSCAN?
Yes, I've thought about those topics and may make a video about them.
This video is super duper bam bam double double bam!
Will you cover more advanced clustering techniques such as model-based clustering (MCLUST) and weighted gene co-expression network analysis (WGCNA)? I'm learning about these things now for my research, and will be very grateful if you can cover these topics for me. Thanks! :)
Thanks! :)
Hello Josh! The videos are soooooooo goooood! These are BAMMMMM Good!!
1 request - Could you please create a video on LCA - Latent Class Analysis? Maybe by comparing it to k-means clustering? I cannot be more thankful!
would like this too
I love this. Your video is wonderful!
Thank you! :)
I LOVE YOU JOSH !
:)
Thank you so much sir! This is very helpful and very informative.
Glad it was helpful!
Congratulations! your video is so great! you explain is a very clear and simple way.
Thank you! 😃
The part that I don't understand is the top blue and orange: did you apply hierarchical clustering to the genes and to the samples?
What time point in the video, minutes and seconds, are you asking about?
fantastic explanation, thank you so much for this video.
Thanks!
Hi Josh, I am really enjoying your videos, especially the wah-wahs and bams!! You make stats sound easy but also fun! Thank you! I wonder if you could please do a video to explain the different uses of PCA and HCA: when do you use one or the other? In the meantime I will watch your videos on PCA and HCA :) hooray!
BAM! Thank you very much! I'll keep that topic in mind.
Man your videos are soo super helpful! THANK YOU (ps consider the color library viridis to make it easier for the colorblind)
Thanks!
Nice explanation 👍👍
Thanks!
Great as always! Thanks.
Thank you! :)
@StatQuest with Josh Starmer, in this video you are clustering and combining genes (the attributes of the data). Aren't you supposed to cluster and combine the samples? That's the inverse of the approach shown.
You can cluster the samples or the genes, or both! It all depends on the question you are asking. For example, if I have some healthy people and some sick people, I might be interested in clustering the people (to see if healthy people form one cluster and unhealthy people form another) or I might be interested in clustering the genes. In this case I would find out which genes are correlated and up-regulated in healthy people compared to unhealthy people. Or I could do both. Does that make sense?
Thank You Sir, It was awesome to learn from you.
BAM! :)
Good job, Mr. Josh.
Thank you!
Excellent Sir
Thanks!
Thank you very much!🥰 You saved my life 🥲
I have a question: if my dataset has continuous variables (e.g., income) and a discrete variable (e.g., number of children in the household), how can I measure the distance between them? Thank you!!!
You can use one-hot-encoding ruclips.net/video/589nCGeWG1w/видео.html or you can use a random forest to do the clustering ruclips.net/video/sQ870aTKqiM/видео.html
@@statquest Thanks again! I think I will learn a lot if I subscribe to this channel 🥰🥰
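A minimal sketch of the one-hot-encoding route (the column names and values are hypothetical, made-up data); scaling afterwards keeps income's large magnitude from dominating the distances:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage

# Made-up household data (hypothetical column names)
df = pd.DataFrame({
    "income": [42_000, 55_000, 31_000, 90_000],
    "region": ["north", "south", "south", "north"],  # categorical
    "n_children": [0, 2, 1, 3],                      # discrete count
})

# One-hot encode the categorical column; the count column can stay numeric
encoded = pd.get_dummies(df, columns=["region"])

# Scale everything so income's large magnitude doesn't dominate the distance
X = StandardScaler().fit_transform(encoded)
Z = linkage(X, method="average", metric="euclidean")
```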
Great explanation. Thanks StatQuest!
Hi, may I know about the clustering of the columns? Is it possible to cluster the columns and the rows at the same time? Correct me if I am wrong: clustering the rows means grouping genes that have similar expression across the different samples, and clustering the columns means grouping the samples with similar gene expression?
Sure! You can cluster both the rows and columns at the same time.
Thank you very much for this video! It was really well done :)
Glad you liked it!
You could explain the same concept with maybe some other datasets and a visualization other than a heatmap.
Hey, Josh, is there any video in which you address unsupervised and supervised hierarchical clustering of gene and lincRNA expressions? If not, could you do a video about that or provide me with some links to read about? I can't find any. Thanks.
This video is unsupervised hierarchical clustering.
@@statquest oh, yeah, thanks. I just did some readings about unsupervised and got more info. I’ll keep searching for supervised clustering. Thanks a lot! Great video.
Thank you for all your videos clearly explaining complex concepts. Can you also make video(s) on different bi-clustering methods?
I'll keep that in mind.
How do you tell whether using Euclidean or Manhattan Distance would be more insightful without having to run both?
Sometimes you know from how the data are generated (are you comparing commute times in Manhattan? Then use the Manhattan distance), but usually you have to run both.
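If you do end up running both, here is a minimal sketch of the comparison with SciPy on made-up data; the cophenetic correlation is one rough way to see how faithfully each tree preserves its distances:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
X = rng.normal(size=(15, 4))  # made-up data

for metric in ["euclidean", "cityblock"]:   # cityblock = Manhattan
    d = pdist(X, metric=metric)
    Z = linkage(d, method="average")
    c, _ = cophenet(Z, d)  # how well the dendrogram preserves the distances
    print(f"{metric}: cophenetic correlation = {c:.3f}")
```

Cophenetic correlation only measures internal consistency, so treat it as a hint, not a verdict.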
Hi, Joshua. Do you know the basics of pseudotime analysis in single-cell RNA-seq? Can you make a short video talking about the basics? Thanks!
I'll put that on the to-do list!
really awesome video! This will help me with my test. Thank you!
Great video!
Do you have any plans to talk about co-clustering? Looking forward to it.
Hi Josh! Can you please make a video on DBSCAN, if possible? Especially the parameter tuning part of it, I'm sure that would be of great help to lots of people.
I'll keep that in mind.
How do we validate these clustering techniques? I mean, for a given dataset, let's assume I have tried various hierarchical clustering techniques like single linkage, complete linkage, etc., using various distance metrics for each method. How do I pick the right one from all the different clusterings that have been formed for that particular dataset?
This is going to sound very disappointing, but since these methods are generally used to explore data and extract new insights from it, you pick the method that gives you the most insight. So try them and see if one makes more sense than the others.
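That said, if you want a number to put beside each candidate, the silhouette score is one common (if imperfect) option; a minimal sketch with scikit-learn on made-up data:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(4)
# Made-up data: two blobs of 20 points each
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])

# Compare linkage methods by how well-separated the resulting clusters are
for link in ["ward", "complete", "average", "single"]:
    labels = AgglomerativeClustering(n_clusters=2, linkage=link).fit_predict(X)
    print(f"{link}: silhouette = {silhouette_score(X, labels):.3f}")
```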
Hello Josh. If I have several categorical variables like "yes" or "no", which clustering method do I have to use? Or is clustering not the best method for categorical variables? If not, why? Thank you!
The trick is that you need a distance metric that works with categorical variables. The standard distance, the Euclidean distance, is not very good for categorical variables.
@@statquest Thanks for the answer! What would be a distance metric for categorical variables?
@@alexiasantos5526 There's something called a Gower Similarity coefficient that might work with your data. See: stats.stackexchange.com/questions/15287/hierarchical-clustering-with-mixed-type-data-what-distance-similarity-to-use/15313#15313
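For illustration, a minimal hand-rolled sketch of the Gower idea on made-up data (a simplified version of the coefficient, not a library call): numeric columns contribute a range-normalized absolute difference, categorical columns contribute a 0/1 mismatch, and the per-column distances are averaged:

```python
import numpy as np
import pandas as pd

def gower_distance(df: pd.DataFrame, i: int, j: int) -> float:
    """Average per-column distance between rows i and j (simplified Gower)."""
    parts = []
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            # numeric: absolute difference, normalized by the column's range
            col_range = df[col].max() - df[col].min()
            parts.append(abs(df[col].iat[i] - df[col].iat[j]) / col_range
                         if col_range else 0.0)
        else:
            # categorical: 0 if the values match, 1 if they don't
            parts.append(0.0 if df[col].iat[i] == df[col].iat[j] else 1.0)
    return float(np.mean(parts))

# Made-up mixed data
df = pd.DataFrame({"income": [42_000, 90_000, 45_000],
                   "smoker": ["yes", "no", "yes"]})
print(gower_distance(df, 0, 1))  # far apart: income AND smoker both differ
print(gower_distance(df, 0, 2))  # close: similar income, same smoker status
```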
Amazing! Your videos are so comprehensible. I really enjoy watching!!! *_*
Thank you!
Hi, can you please give a tutorial about the logic and calculation of the Mahalanobis/statistical distance?
Thanks,
- Dean
I'll keep that in mind.
just beautiful!
how can clustering be applied on spectral data?
So, in each step, we have an upper-triangle matrix of distances between the current clusters, we combine only the closest two, and then we repeat the process!? Silly in terms of computation!
noted
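For reference, a toy sketch of the naive agglomerative loop described in that comment; in practice you would call scipy.cluster.hierarchy.linkage, which is far more efficient:

```python
import numpy as np

def naive_single_linkage(X: np.ndarray):
    """Toy agglomerative clustering: repeatedly merge the two closest clusters."""
    clusters = [[i] for i in range(len(X))]
    merges = []
    while len(clusters) > 1:
        best = (np.inf, None, None)
        for a in range(len(clusters)):           # scan the upper triangle
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between the closest pair of points
                d = min(np.linalg.norm(X[p] - X[q])
                        for p in clusters[a] for q in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((clusters[a], clusters[b], d))
        clusters[a] = clusters[a] + clusters[b]  # combine the closest two...
        del clusters[b]                          # ...and repeat
    return merges

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
for left, right, d in naive_single_linkage(X):
    print(f"merge {left} + {right} at distance {d:.2f}")
```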