StatQuest: Hierarchical Clustering

  • Published: 18 Dec 2024

Comments •

  • @statquest
    @statquest  2 years ago +11

    Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

  • @Aemilindore
    @Aemilindore 3 years ago +155

    You're a person who saved me lots of time and pain. Thank you. I wish you the best

    • @statquest
      @statquest  3 years ago +4

      Thank you very much! :)

  • @kristinomalley4519
    @kristinomalley4519 1 year ago +25

    You are, and I cannot stress this enough, a national treasure!! The ease in how you explain things that have eluded me for over a decade and make it click is truly a gift. Thank you so freaking much!!!

  • @anamulmbdu
    @anamulmbdu 6 years ago +197

    The intro song removed my fear of clustering. Thanks for the awesome video.

    • @nemothekitten3994
      @nemothekitten3994 2 years ago +3

      going on a StatQuest 😌

    • @w花b
      @w花b 2 months ago

      @@nemothekitten3994 aww...

  • @julieboissiere4553
    @julieboissiere4553 2 years ago +17

    I used to watch your videos while I was a student. It’s been 3 years since my graduation and I’m still here (I’m changing jobs and need to review some stuff).
    Thank you so much for your incredible work

    • @statquest
      @statquest  2 years ago +8

      Congratulations on the new job! BAM! :)

  • @fadikhattar290
    @fadikhattar290 2 years ago +6

    I still can't believe this content is free. Thank you, sir!

  • @yamikag8363
    @yamikag8363 2 years ago +4

    Your videos help me see the "big picture" of concepts. After your videos, I can actually understand what is going on and why we are doing something. Thank you!

  • @KL1_Khaled
    @KL1_Khaled 3 months ago +3

    Even after 7 years, you're still the savior

    • @statquest
      @statquest  3 months ago +2

      Glad I could help!

  • @rajshrestha9484
    @rajshrestha9484 5 years ago +56

    I can't thank you enough. Such clear and helpful explanations. Great.

  • @stephenwood9252
    @stephenwood9252 2 years ago +4

    Love your videos. The fact that you make it so simple shows the depth of your understanding.

  • @brunomartel4639
    @brunomartel4639 4 years ago +125

    this video proved that "hard" stuff = badly explained stuff

    • @sindhujas7807
      @sindhujas7807 4 years ago +1

      so fuckin true. Not sorry for swearing. Happy learning guys

    • @gummybear8883
      @gummybear8883 3 years ago +4

      if you can't explain something in simple terms, then you don't understand it that well.

    • @julius4858
      @julius4858 3 years ago +7

      @@gummybear8883 or you've been a professor for 20 years and are so deep into a topic that you completely forgot how people approach new problems. Your sentence really only applies to novices trying to be teachers.

    • @MungoBootyGoon
      @MungoBootyGoon 3 years ago +6

      @@julius4858 We could just change it to: if you can't explain something in simple terms, then you can't teach it that well.

    • @julius4858
      @julius4858 3 years ago

      @@MungoBootyGoon Yeah, that is absolutely true. Many of my professors for theoretical computer science are experts in various fields, but man, do their explanations suck. That's why I have to watch YouTube videos for stuff like this.

  • @chikken007
    @chikken007 4 years ago +4

    I already watched some of your videos. This one I watched because I want to apply hierarchical clustering in my thesis. It is about time I buy one of your sweaters; I hope this supports you. Thanks for all the truly great explanations. THANK YOU!

    • @statquest
      @statquest  4 years ago +1

      Thank you very much!!! :)

  • @scraps7624
    @scraps7624 2 years ago +7

    This channel is a treasure! Absolutely incredible job my man

    • @statquest
      @statquest  2 years ago +1

      Thank you so much 😀!

  • @jingsilu5568
    @jingsilu5568 2 years ago +1

    Thank you for clearly explaining the details at a moderate speed! You save me lots of time!

  • @pragyamishra9083
    @pragyamishra9083 3 years ago +5

    The visualizations and simplicity of explanations as well as great examples motivate me to keep learning. Thank you so much for making it so interesting. I'll try to do my bit by buying a t-shirt. 😊

  • @websciencenl7994
    @websciencenl7994 2 years ago +1

    StatQuest is the best! Teaching is an art... and these are masterpieces.

    • @statquest
      @statquest  2 years ago

      WOW! Thank you very much! :)

  • @davidescobar4449
    @davidescobar4449 5 years ago +3

    I have to congratulate you for this video; it gives the basic notions of hierarchical clustering easily and quickly. Bravo!

  • @patolizac23
    @patolizac23 3 days ago +1

    my teacher keeps flying to new york and doesn't teach us crap about this so thank you for this pookie

  • @HiasHiasHias
    @HiasHiasHias 5 months ago +2

    StatQuest never disappoints

  • @liranzaidman1610
    @liranzaidman1610 4 years ago +19

    Very nice.
    I use this in Python and it's a really good way to cluster.
    Another thing - from a coding perspective, it's only one line of code in Seaborn, very easy. (See the sketch after this thread.)

    • @statquest
      @statquest  4 years ago +1

      Thanks for sharing!
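
  A minimal sketch of the Seaborn one-liner mentioned above, assuming a pandas DataFrame of gene expression values; the gene and sample names and the numbers here are made up:

      import numpy as np
      import pandas as pd
      import seaborn as sns

      rng = np.random.default_rng(42)
      expr = pd.DataFrame(rng.normal(size=(20, 6)),
                          index=[f"gene{i}" for i in range(1, 21)],
                          columns=[f"sample{j}" for j in range(1, 7)])

      # One call hierarchically clusters both the rows (genes) and the
      # columns (samples) and draws the heatmap with dendrograms attached.
      sns.clustermap(expr, method="average", metric="euclidean")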

  • @loftyTHEOWNER
    @loftyTHEOWNER 2 years ago +2

    I would like to add that:
    - single-linkage (comparing the closest points of 2 clusters) tends to form more elliptic clusters;
    - complete-linkage tends to form more globular clusters.
    So whether you leave your data unscaled, scale it with a StandardScaler, or scale it with a MinMaxScaler will affect your clustering.
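
  A small sketch of the effect described above, with made-up data: single- and complete-linkage are run on unscaled, standardized, and min-max-scaled versions of the same matrix so the resulting cluster sizes can be compared:

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from sklearn.preprocessing import StandardScaler, MinMaxScaler

      rng = np.random.default_rng(0)
      X = rng.normal(size=(30, 4)) * [1, 10, 100, 1000]  # wildly different column scales

      for scaler in (None, StandardScaler(), MinMaxScaler()):
          Xs = X if scaler is None else scaler.fit_transform(X)
          for method in ("single", "complete"):
              Z = linkage(Xs, method=method)                   # hierarchical clustering
              labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 clusters
              name = "unscaled" if scaler is None else type(scaler).__name__
              print(name, method, np.bincount(labels)[1:])     # cluster sizes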

  • @jonathanl7204
    @jonathanl7204 1 year ago +2

    Thank you. Better than university teaching

  • @calebsawe8307
    @calebsawe8307 2 years ago +1

    I am super grateful for this video. You are such an excellent teacher! Thank you for being such a "you"

  • @farzanaferdousi9885
    @farzanaferdousi9885 3 years ago +1

    Your explanation is very clear to me, and I watch all your videos; you are very friendly to me. I like you very much.

  • @abhayjoshi2121
    @abhayjoshi2121 2 years ago +1

    You are simply amazing!! I love your style and simplicity, and the word is BAM! Your videos are very informative and worth going through... thanks for all your hard work in simplifying complex topics

  • @nnnyin6967
    @nnnyin6967 1 year ago +1

    I am preparing for my actuarial exam and you saved me a lot ❤

  • @gurkanyesilyurt4461
    @gurkanyesilyurt4461 4 years ago +1

    You saved yet another day, Josh. Thank you

  • @KasperRasmussen-z3y
    @KasperRasmussen-z3y 1 year ago

    This channel is truly a treasure trove! I was wondering if you could do a video on consensus clustering? I.e., how to evaluate clustering across multiple models and parameters. You are awesome!

  • @marahakermi-nt7lc
    @marahakermi-nt7lc 1 year ago +2

    Oh my god, thanks Josh, you are so brilliant. I think Marvel should add a new superhero: "Josh Starmer, the life saver"

  • @vishk123
    @vishk123 1 year ago +1

    Thank you for allowing me to ascend the stats hierarchy!

  • @congchen170
    @congchen170 7 years ago

    Joshua's videos are always helpful. Next time, maybe k-means clustering?

  • @anastasiyakuznetsova8797
    @anastasiyakuznetsova8797 3 years ago +1

    The best as always! Love this channel! It's super easy to understand

  • @mountainsunset816
    @mountainsunset816 1 year ago +2

    The opening is always funny

  • @manuelsokolov
    @manuelsokolov 1 year ago +1

    Dear StatQuest! Thank you for the explanation.
    1. How would you best evaluate the algorithm (silhouette score, ...) to decide which clustering method and distance to use? (I understand that the silhouette score is good for choosing the number of clusters k, but not for deciding between algorithms.)
    To decide on the best algorithm, I have been plotting a PCA and coloring the labels by the clusters created this way, to see whether the clusters make sense or not. (However, it is known from the literature that PCA does not work well for evaluating binary data.)
    2. In the case that the data are binary (e.g., genomic alteration data instead of expression data), what kind of distance would you use?
    Best regards, Manuel

    • @statquest
      @statquest  1 year ago

      1) I guess it depends. If I had "training" data, with known categories, I would compare how many times the data were correctly and incorrectly grouped. Otherwise, it really just boils down to subjective preference.
      2) If you measure a lot of things, the Euclidean distance will still work in this situation.
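
  One possible way to compare the options, following the exchange above: compute a silhouette score for each distance metric (scipy's "jaccard" and "hamming" are common choices for binary data, alongside "euclidean"). This is only a sketch with made-up binary data:

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import pdist, squareform
      from sklearn.metrics import silhouette_score

      rng = np.random.default_rng(1)
      X = rng.integers(0, 2, size=(50, 100))  # made-up binary "alteration" data

      for metric in ("euclidean", "jaccard", "hamming"):
          D = pdist(X, metric=metric)              # condensed distance matrix
          Z = linkage(D, method="average")
          labels = fcluster(Z, t=2, criterion="maxclust")
          # silhouette_score accepts a square distance matrix with metric="precomputed"
          score = silhouette_score(squareform(D), labels, metric="precomputed")
          print(metric, round(score, 3))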

  • @ramsha8540
    @ramsha8540 8 months ago

    10:08 do you have any videos that talk about clustering in R?
    Thank you for all your explanations, btw!!

    • @statquest
      @statquest  8 months ago

      Unfortunately, no. :(

  • @zzzluke8906
    @zzzluke8906 1 year ago +1

    Hi Josh, amazing video as always. Do you think you could make a video on how to determine the best number of clusters? I get the elbow method, but I really struggle with the inconsistency method. I was looking at the inconsistency coefficients, and I am confused about whether they include singleton clusters or exclude them. I am also confused about what exactly the "jump" in the inconsistency coefficient is that we are supposed to look out for.

    • @statquest
      @statquest  1 year ago

      I'll keep that topic in mind.

  • @datdao6982
    @datdao6982 3 years ago

    Hi, just a question. At 7:16, if I'm not mistaken, genes 1 and 2 are analogous to variables 1 and 2 (aka x & y in a 2-dimensional dataset). So shouldn't the distance be sqrt((x1-x2)^2 + (y1-y2)^2), or sqrt((1.6-0.5)^2 + (-0.5+1.9)^2)? Sorry if it may seem a stupid question, but since I'm not that good at maths in general, I need to turn everything into the basics to understand. Thank you

    • @statquest
      @statquest  3 years ago +1

      In this example we are trying to find out how similar (or different) Gene 1 is to (or from) Gene 2 across all samples, so we are comparing the distances between Gene 1 and Gene 2 in both samples. In other words, if both genes have similar values in Sample #1 and similar values in Sample #2, then we will consider both genes to be similar. In contrast, if the values for Genes 1 and 2 are different from each other in Sample #1 and different from each other in Sample #2, then we will consider the genes to be very different from each other. Thus, we are looking at the difference in genes within each sample.
      In contrast, you are asking to look at the sample differences within each gene. This would tell us whether Sample #1 and Sample #2 are similar or not, and, in this example, we are not interested in that. Does that make sense? (A numeric sketch follows this thread.)

    • @datdao6982
      @datdao6982 3 years ago +1

      @@statquest I kinda get it. Thank you.
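
  A tiny numeric sketch of the distinction in the reply above; the values are hypothetical, not necessarily the video's:

      import numpy as np

      # Each gene is a point, with one coordinate per sample (hypothetical values).
      gene1 = np.array([1.6, -0.5])   # gene 1 in sample #1 and sample #2
      gene2 = np.array([0.5, -1.9])   # gene 2 in sample #1 and sample #2

      # Distance between the two *genes*, measured across the samples:
      euclidean = np.sqrt(np.sum((gene1 - gene2) ** 2))  # sqrt(1.1^2 + 1.4^2) ~ 1.78
      manhattan = np.sum(np.abs(gene1 - gene2))          # |1.1| + |1.4| = 2.5

      # Comparing the *samples* instead would mean transposing: treat each
      # column (sample) as the point and each gene as a coordinate.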

  • @Paulamiz
    @Paulamiz 3 years ago +2

    Watching this after watching your more recent videos. Missed your 'BAM's a lot!!! You should remake these old videos again! Thanks :)

    • @statquest
      @statquest  3 years ago +2

      bam! :)

    • @Paulamiz
      @Paulamiz 3 years ago +2

      @@statquest 😍

    • @vakarthi4
      @vakarthi4 3 years ago

      Found this gem of a channel today. Agreed on the fun rhymes and puns.

  • @99harshini
    @99harshini 5 years ago +6

    Absolutely brilliant... Thank you sooo much for your time and effort!

  • @옹늬야아
    @옹늬야아 11 months ago +1

    You saved my life😇 Thank you very much.
    And I think the link for the sample code in R isn't available right now...

    • @statquest
      @statquest  11 months ago

      Yep, that's a really old link. Here's a new one: statquest.org/statquest-hierarchical-clustering/

  • @2327853
    @2327853 5 years ago +2

    @StatQuest please explain probability and Naive Bayes. Thanks in advance! I am a huge fan of your way of teaching and your little song creations. Keep up the good work!

  • @tudorpricop5434
    @tudorpricop5434 1 year ago

    At 7:28, we calculated the number 3.2 as the difference between gene 1 and gene 2. But the whole purpose of calculating is to figure out which gene is the most similar to gene 1 (for example).
    Now my question: after we compute the values between [gene 1 and gene 2], [gene 1 and gene 3], and [gene 1 and gene 4], do we select the gene with the SMALLEST VALUE as the most similar gene to gene 1? Or the BIGGEST VALUE? I think the smallest, but just to be sure...

    • @statquest
      @statquest  1 year ago

      In this case we want the smallest distance, which means the most similar.

  • @proggenius2024
    @proggenius2024 8 months ago +1

    awesome content and delivery

    • @statquest
      @statquest  8 months ago

      Glad you think so!

  • @kurniadi-5492
    @kurniadi-5492 2 years ago +2

    it doesn't define whether it must be from the shortest Euclidean distance or what, and basically what makes one dendrogram shorter than another

    • @statquest
      @statquest  2 years ago

      I'm not sure I understand your comment. Can you clarify?

  • @eamiller12
    @eamiller12 2 years ago +1

    THANK YOU! This has been SO HELPFUL!

  • @govamurali2309
    @govamurali2309 3 years ago

    Josh, how do we figure out the colors in the first place? @8:47... Say we measure the genes. Red denotes values from 0.8-1, blue denotes values from 0.1-0.2. Am I right?

    • @statquest
      @statquest  3 years ago +1

      The coloring is actually arbitrary. Usually we like to have a gradient from the maximum value to the minimum value, but there is no rule that says we should only use 2 colors. We could use 3 or more. The idea is simply to create an image that is informative and useful.

    • @govamurali2309
      @govamurali2309 3 years ago

      @@statquest Thanks Josh!!

  • @aggelosdidachos3073
    @aggelosdidachos3073 4 years ago

    Hello, I am Angelos Didachos and I have a question for StatQuest. 9:54 Is the way of comparing the point to the cluster the same as before? That is, Manhattan distance, Euclidean distance, etc.?

  • @shamanthrajreddy1230
    @shamanthrajreddy1230 2 years ago +1

    Excellent explanation!

  • @saipanchajanya5980
    @saipanchajanya5980 4 years ago +1

    This is awesome......
    Please make a session on k-modes, KNN, and k-prototypes

    • @statquest
      @statquest  4 years ago

      Here's a complete list of my videos so far: statquest.org/video-index/

  • @ankitabhavsar886
    @ankitabhavsar886 6 months ago +1

    the intro.......nice one bro🖐

  • @emamulmursalin9181
    @emamulmursalin9181 3 years ago +2

    Great explanation, Josh! Just one question: are we clustering the samples (data points) or the genes (features)? If we are clustering genes, doesn't that mean we are just clustering the correlated features?

    • @statquest
      @statquest  3 years ago

      In this video we are clustering the genes, and yes, the idea is that correlated features are brought together. We could even just calculate the correlation coefficient for each pair and cluster based on those values.

    • @emamulmursalin9181
      @emamulmursalin9181 3 years ago

      @@statquest Thanks for your reply.
      But I have seen some other blogs where the authors plot 2D data points and use hierarchical clustering. So in real life, do we use hierarchical clustering for data clustering or feature clustering?

    • @statquest
      @statquest  3 years ago

      @@emamulmursalin9181 I'm not sure what you mean by "data" clustering, however, we can cluster the rows or the columns with similar ease. It doesn't matter if one is features and the other is samples.

    • @emamulmursalin9181
      @emamulmursalin9181 3 years ago

      @@statquest Sorry for using an unclear term. Actually, I meant "samples" by the term "data".
      So, can hierarchical clustering be used for "feature clustering" (for example, finding correlated features and removing the redundant ones) and also for "sample clustering" (e.g., just like k-means clustering)?

    • @statquest
      @statquest  3 years ago +1

      @@emamulmursalin9181 Yes. We can cluster the rows just as easily as we cluster the columns.
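
  A small sketch of the point above: the same routine clusters rows (genes/features) or columns (samples), and transposing the matrix is all it takes. The data here are made up:

      import numpy as np
      from scipy.cluster.hierarchy import linkage

      rng = np.random.default_rng(7)
      X = rng.normal(size=(10, 5))  # 10 genes (rows) x 5 samples (columns)

      Z_genes   = linkage(X,   method="average")  # cluster the rows
      Z_samples = linkage(X.T, method="average")  # cluster the columns
      print(Z_genes.shape, Z_samples.shape)       # (9, 4) and (4, 4)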

  • @rodrigohaasbueno8290
    @rodrigohaasbueno8290 5 years ago +1

    I love this channel so much

  • @davidcartwright337
    @davidcartwright337 5 years ago +2

    Great videos, I like the way you explain these topics

  • @hamidkiangaikani
    @hamidkiangaikani 3 years ago +1

    4.4K likes, zero dislikes! You're awesome. Thanks very much

  • @MihirSriramVadali
    @MihirSriramVadali 5 months ago

    Great channel. Almost all the topics I watched on ML were clearly explained. One question: what does "gene" stand for? Is it a feature of the data?

    • @statquest
      @statquest  5 months ago

      Yes, it's a feature.

  • @mayconmarcao4554
    @mayconmarcao4554 2 years ago +1

    Hey Josh, what is the difference between PCA and hierarchical clustering? Could you give me an example for each one? I know some people say "PCA groups variables" and "HC groups observations". I think the output from each one reflects that explanation. But it seems we could use both techniques to answer the same question...

    • @statquest
      @statquest  2 years ago +1

      Although both methods can be applied to the exact same problem (and frequently are both applied to the same problem), they have different strengths. PCA, for example, has loading scores, which would tell us how much each individual variable contributes to the clustering. In contrast, hierarchical clustering gives us a nice heatmap-style graph that makes it easy to see the big picture of how and why things are similar and different. I say "try them both." (A sketch follows this thread.)

    • @mayconmarcao4554
      @mayconmarcao4554 2 years ago +1

      @@statquest BAMM! I got it. Thank you Sir!
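
  A sketch of "try them both", following the reply above: PCA exposes loading scores per variable, while a clustered heatmap shows the big picture. The data and column names are made up:

      import numpy as np
      import pandas as pd
      import seaborn as sns
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(5)
      df = pd.DataFrame(rng.normal(size=(30, 4)), columns=["a", "b", "c", "d"])

      pca = PCA(n_components=2).fit(df)
      loadings = pd.DataFrame(pca.components_.T, index=df.columns,
                              columns=["PC1", "PC2"])  # per-variable contributions
      print(loadings)

      sns.clustermap(df)  # hierarchical-clustering view of the same data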

  • @lazyboy7521
    @lazyboy7521 3 years ago

    Great video! There is a minor mistake around 8:21. You should replace "sample" with "gene" when calculating the distance, i.e., |difference in gene #1| + |difference in gene #2| + ...

    • @statquest
      @statquest  3 years ago

      I believe the video is correct. For details, see: 6:01

  • @moikanal4625
    @moikanal4625 2 hours ago

    thanks for the amazing lessons

  • @balajicanchi5538
    @balajicanchi5538 7 years ago

    Explained in a simple manner.

  • @sickleharvestsleeks
    @sickleharvestsleeks 3 years ago

    9:44 average clusters is mean linkage; centroid is centroid of a cluster?

    • @statquest
      @statquest  3 years ago

      I'm not sure I understand your question.

  • @isha996
    @isha996 6 years ago +1

    Please add a video on Latin square design, Joshua!
    I am going to pass my stats final tomorrow only because of your videos :D
    Your students are lucky.

    • @isha996
      @isha996 6 years ago +1

      The PCA and clustering question was worth 30% of the total marks on my exam today, and I managed to answer it so well only because of your videos. You're a savior. Thank you!!

  • @mikecy5507
    @mikecy5507 2 years ago

    Great channel! Clear explanations. In HCA, could you not follow up the clustering of rows (genes) by clustering the columns (samples)? Is this done automatically? It doesn't seem like the best heatmap would be produced if you just cluster/shuffle the rows; you would have to cluster/shuffle the columns too, right? Also, must/should the data be standardized first?

    • @statquest
      @statquest  2 years ago

      You can cluster both columns and rows. And sometimes standardizing helps, sometimes it doesn't. It's worth trying both options.

    • @mikecy5507
      @mikecy5507 2 years ago +1

      @@statquest Thanks!

  • @oliviagallupova9199
    @oliviagallupova9199 5 years ago +1

    You saved me a week

  • @fellsantfernandoargentin2072
    @fellsantfernandoargentin2072 6 years ago

    Congratulations from Brazil!

  • @fabiomaia3433
    @fabiomaia3433 4 years ago +3

    Hey Josh! Your videos are great! Thank you for the effort you've put into them!
    If you allow me... have you considered making videos explaining DBSCAN and HDBSCAN?

    • @statquest
      @statquest  4 years ago +2

      Yes, I've thought about those topics and may make a video about them.

  • @muhammadiqbalmarzuki
    @muhammadiqbalmarzuki 4 years ago +1

    This video is super duper bam bam double double bam!
    Will you cover more advanced clustering techniques such as model-based clustering (MCLUST) and weighted gene co-expression network analysis (WGCNA)? I'm learning about these things now for my research, and will be very grateful if you can cover these topics for me. Thanks! :)

  • @preranadas4037
    @preranadas4037 4 years ago +4

    Hello Josh! The videos are soooooooo goooood! These are BAMMMMM good!!
    One request - could you please create a video on LCA - Latent Class Analysis? Maybe by comparing it to k-means clustering? I cannot be more thankful!

  • @LetWorkTogether
    @LetWorkTogether 5 years ago +3

    I love this. Your video is wonderful!

  • @MrKingoverall
    @MrKingoverall 5 years ago +2

    I LOVE YOU JOSH !

  • @jovanmampusti4025
    @jovanmampusti4025 3 years ago +1

    Thank you so much sir! This is very helpful and very informative.

    • @statquest
      @statquest  3 years ago +1

      Glad it was helpful!

  • @robertogff
    @robertogff 4 years ago

    Congratulations! Your video is so great! You explain in a very clear and simple way.

  • @HanyMostafa-sk9ml
    @HanyMostafa-sk9ml 2 days ago

    The part that I don’t understand is the top blue and orange: did you apply hierarchical clustering to the genes and to the samples?

    • @statquest
      @statquest  2 days ago

      What time point in the video, minutes and seconds, are you asking about?

  • @alyssawang144
    @alyssawang144 3 years ago +1

    fantastic explanation, thank you so much for this video.

  • @cfonsecaparis812
    @cfonsecaparis812 3 years ago +1

    Hi Josh, I am really enjoying your videos, especially the wah-wahs and BAMs!! You make stats sound easy but also fun! Thank you! I wonder if you could please do a video to explain the different uses of PCA and HCA: when do you use one or the other? In the meantime I will watch your videos on PCA and HCA :) hooray!

    • @statquest
      @statquest  3 years ago

      BAM! Thank you very much! I'll keep that topic in mind.

  • @CapoeiraPiper
    @CapoeiraPiper 4 years ago +1

    Man, your videos are soo super helpful! THANK YOU (p.s. consider the color library viridis to make it easier for the colorblind)

  • @python_information601
    @python_information601 3 years ago +1

    Nice explanation 👍👍

  • @mojtabasardarmehni453
    @mojtabasardarmehni453 3 years ago +1

    Great as always! Thanks.

  • @the_data_panda
    @the_data_panda 5 years ago +2

    @StatQuest with Josh Starmer, in this video you are clustering and combining genes (the attributes of the data); aren't you supposed to cluster and combine the samples? That's the inverse of the approach shown.

    • @statquest
      @statquest  5 years ago +5

      You can cluster the samples or the genes, or both! It all depends on the question you are asking. For example, if I have some healthy people and some sick people, I might be interested in clustering the people (to see if healthy people form one cluster and unhealthy people form another) or I might be interested in clustering the genes. In this case I would find out which genes are correlated and up-regulated in healthy people compared to unhealthy people. Or I could do both. Does that make sense?

  • @saikiranjajula2033
    @saikiranjajula2033 4 years ago +1

    Thank you, Sir. It was awesome to learn from you.

  • @ardaugurlu8673
    @ardaugurlu8673 6 years ago +2

    Good job, Mr. Josh.

  • @subhabrataghosh9831
    @subhabrataghosh9831 3 years ago +1

    Excellent Sir

  • @patriciacontreras8435
    @patriciacontreras8435 8 months ago

    Thank you very much! 🥰 You saved my life 🥲
    I have a question: if my dataset has continuous variables (e.g., income) and a discrete variable (e.g., number of children in the household), how can I measure the distance between them? Thank you!!!

    • @statquest
      @statquest  8 months ago +1

      You can use one-hot-encoding ruclips.net/video/589nCGeWG1w/видео.html or you can use a random forest to do the clustering ruclips.net/video/sQ870aTKqiM/видео.html (A sketch follows this thread.)

    • @patriciacontreras8435
      @patriciacontreras8435 8 months ago +1

      @@statquest Thanks again! I think I will learn a lot if I subscribe to this channel 🥰🥰
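
  A sketch of the one-hot-encoding suggestion above, with made-up columns: the categorical column is expanded into 0/1 indicators, and everything is put on a common scale before clustering:

      import pandas as pd
      from scipy.cluster.hierarchy import linkage
      from sklearn.preprocessing import StandardScaler

      df = pd.DataFrame({
          "income":   [42_000, 58_000, 31_000, 77_000],
          "children": [0, 2, 1, 3],
          "region":   ["north", "south", "north", "east"],  # categorical
      })

      encoded = pd.get_dummies(df, columns=["region"])  # one-hot encode the category
      scaled = StandardScaler().fit_transform(encoded)  # comparable column scales
      Z = linkage(scaled, method="average")             # then cluster as usual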

  • @urjaswitayadav3188
    @urjaswitayadav3188 7 years ago

    Great explanation. Thanks StatQuest!

  • @huikianong1695
    @huikianong1695 3 years ago

    Hi, may I ask about the clustering of the columns? Is it possible to cluster the columns and rows at the same time? Correct me if I am wrong: clustering the rows means grouping genes that have similar expression across the different samples, and clustering the columns means grouping the samples with similar gene expression?

    • @statquest
      @statquest  3 years ago

      Sure! You can cluster both the rows and columns at the same time.

  • @tymothylim6550
    @tymothylim6550 3 years ago +1

    Thank you very much for this video! It was really well done :)

  • @sonakshigarg4273
    @sonakshigarg4273 5 years ago

    You could explain the same concept with some other datasets and maybe a better visualization than a heatmap

  • @taleco21
    @taleco21 3 years ago

    Hey, Josh, is there any video in which you address unsupervised and supervised hierarchical clustering of gene and lincRNA expression? If not, could you do a video about that or provide me with some links to read? I can't find any. Thanks.

    • @statquest
      @statquest  3 years ago +1

      This video is unsupervised hierarchical clustering.

    • @taleco21
      @taleco21 3 years ago +1

      @@statquest Oh, yeah, thanks. I just did some reading about unsupervised clustering and got more info. I'll keep searching for supervised clustering. Thanks a lot! Great video.

  • @setareht7546
    @setareht7546 3 years ago

    Thank you for all your videos clearly explaining complex concepts. Can you also make video(s) on different bi-clustering methods?

    • @statquest
      @statquest  3 years ago

      I'll keep that in mind.

  • @charliekpeng
    @charliekpeng 2 years ago

    How do you tell whether using Euclidean or Manhattan Distance would be more insightful without having to run both?

    • @statquest
      @statquest  2 years ago +1

      Sometimes you know from how the data are generated (are you comparing commute times in Manhattan? Then use the Manhattan distance), but usually you have to run both.
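
  "Run both" in practice, as a sketch with made-up data; note that scipy calls the Manhattan distance "cityblock":

      import numpy as np
      from scipy.cluster.hierarchy import linkage
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(3)
      X = rng.normal(size=(8, 4))

      Z_euclidean = linkage(pdist(X, metric="euclidean"), method="average")
      Z_manhattan = linkage(pdist(X, metric="cityblock"), method="average")
      # Compare the two dendrograms / cluster assignments and keep the more useful one.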

  • @糜家睿
    @糜家睿 6 years ago +1

    Hi, Joshua. Do you know the basics of pseudotime analysis in single-cell RNA-seq? Can you make a short video talking about the basics? Thanks!

    • @statquest
      @statquest  6 years ago +1

      I'll put that on the to-do list!

  • @LBsCuriosity
    @LBsCuriosity 5 years ago

    Really awesome video! This will help me with my test. Thank you!

  • @yyma8037
    @yyma8037 4 years ago

    Great video!
    Do you have any plans to talk about co-clustering? Looking forward to it.

  • @daminithandele7237
    @daminithandele7237 4 years ago +1

    Hi Josh! Can you please make a video on DBSCAN, if possible? Especially the parameter-tuning part of it; I'm sure that would be of great help to lots of people.

    • @statquest
      @statquest  4 years ago

      I'll keep that in mind.

  • @monishaap08
    @monishaap08 5 years ago

    How do we validate these clustering techniques? I mean, for a given dataset, let's assume I have tried various hierarchical clustering techniques like single linkage, complete linkage, etc., using various distance metrics for each method. How do I pick the right one from all the different clusterings formed for that particular dataset?

    • @statquest
      @statquest  5 years ago

      This is going to sound very disappointing, but since these methods are generally used to explore data and extract new insights from it, then you pick the method that gives you the most insight. So try them and see if one makes more sense than the others.

  • @alexiasantos5526
    @alexiasantos5526 4 years ago

    Hello Josh. If I have several categorical variables like "yes" or "no", which clustering method do I have to use? Or are clustering methods not the best for categorical variables? If not, why? Thank you!

    • @statquest
      @statquest  4 years ago

      The trick is that you need a distance metric that works with categorical variables. The standard distance, the Euclidean distance, is not very good for categorical variables.

    • @alexiasantos5526
      @alexiasantos5526 4 years ago

      @@statquest Thanks for the answer! What would be the distance metric for categorical variables?

    • @statquest
      @statquest  4 years ago +1

      @@alexiasantos5526 There's something called a Gower Similarity coefficient that might work with your data. See: stats.stackexchange.com/questions/15287/hierarchical-clustering-with-mixed-type-data-what-distance-similarity-to-use/15313#15313
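
  An illustrative, unoptimized sketch of the Gower idea linked above: range-normalized absolute differences for numeric columns, a 0/1 mismatch for categorical ones, averaged over the columns. The column names and values are made up:

      import numpy as np
      import pandas as pd

      def gower_distance(a: pd.Series, b: pd.Series, ranges: dict) -> float:
          parts = []
          for col in a.index:
              if col in ranges:  # numeric column: normalized absolute difference
                  parts.append(abs(a[col] - b[col]) / ranges[col])
              else:              # categorical column: 0 if equal, 1 otherwise
                  parts.append(0.0 if a[col] == b[col] else 1.0)
          return float(np.mean(parts))

      df = pd.DataFrame({"age": [25, 40, 33],
                         "smoker": ["yes", "no", "yes"]})
      ranges = {"age": df["age"].max() - df["age"].min()}  # range of each numeric column

      print(gower_distance(df.iloc[0], df.iloc[1], ranges))  # 1.0  -> (15/15 + 1)/2
      print(gower_distance(df.iloc[0], df.iloc[2], ranges))  # ~0.27 -> (8/15 + 0)/2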

  • @maikfranke2303
    @maikfranke2303 2 years ago +1

    Amazing! Your videos are so comprehensible. I really enjoy watching!!! *_*

  • @dingodin101
    @dingodin101 5 months ago

    Hi, can you please give a tutorial about the Mahalanobis/statistical distance logic and calculation?
    Thanks,
    - Dean

    • @statquest
      @statquest  5 months ago

      I'll keep that in mind.

  • @12bjab
    @12bjab 5 years ago +2

    just beautiful!

  • @veloisamascarenhas7531
    @veloisamascarenhas7531 6 years ago +1

    How can clustering be applied to spectral data?

  • @ahmad3823
    @ahmad3823 7 months ago

    So, at each step, we have an upper triangular matrix of distances between the current clusters, and we only combine the closest two and then repeat the process!? Silly in terms of computation!