Gibbs Sampling : Data Science Concepts

  • Published: 12 Jan 2025

Comments • 76

  • @Adam-ec9dk
    @Adam-ec9dk 3 years ago +78

    I like that you wrote all the major points on the board and fit everything into one slide. Super easy to take a screenshot so I can remember the gist of the video.

  • @ResilientFighter
    @ResilientFighter 3 years ago +37

    Ritvik, your videos are ranking at the top when a person searches "metropolis hastings" and "gibbs sampling". Great job man!

  • @musclesmalone
    @musclesmalone 3 years ago +10

    Fantastic, concise explanation with excellent visualisations. It's also much appreciated that everything is written prior to recording, so there aren't thousands of people (and in some cases millions) waiting while watching you draw a graph or write a formula. Huge appreciation for your work, thank you!

  • @rahulchowdhury9739
    @rahulchowdhury9739 1 year ago +2

    You're one of the best teachers of statistics. Thanks for taking the time to share the way you understand theories and problems.

  • @md.salahuddinparvez6578
    @md.salahuddinparvez6578 6 months ago

    In our master's course on Pattern Analysis at one of the top-ranking universities in Germany, the professor has actually put a link to this video in the slides. And after watching the video, I understand why. You have done a great job explaining, thank you!

  • @Ciavi-ar
    @Ciavi-ar 11 months ago +1

    This did actually help to finally wrap my brain around this topic. Thanks!

  • @des6309
    @des6309 3 years ago +1

    dude you're so talented at explaining

  • @rmb706
    @rmb706 10 months ago +1

    I had to write a Gibbs sampler for my Bayes midterm. That moment when I checked it with PyMC and it was spot on first attempt just felt amazing. 🎉 🔥

  • @DhruveeChauhan
    @DhruveeChauhan 1 year ago +1

    You are literally saving us one day before an exam!

  • @shuangli5466
    @shuangli5466 1 year ago

    Thank you for giving me probably 15 marks on my exam and lowering my probability of failing from 10% to 5%.

  • @王帝森
    @王帝森 3 years ago +2

    Thanks, you are soooooo good at explaining. I will recommend that my professor take a look at your videos.

  • @ksenyaisavnina
    @ksenyaisavnina 2 years ago

    This high-density bubble is like a supermassive black hole: once you get there, you'd never get out :)

  • @adamtran5747
    @adamtran5747 2 years ago

    absolutely love the content brother. Please keep up the amazing work.

  • @bachi5373
    @bachi5373 5 months ago

    What a very clear explanation. Thanks a lot!

  • @AdrianYang
    @AdrianYang 3 years ago

    Thank you for your video, Ritvik. Can I understand it like this: searching within a multi-dimensional space is difficult because there are infinitely many choices of direction, while by fixing all the other dimensions and leaving only one movable, searching within a one-dimensional space becomes super easy because there are only two choices of direction?

  • @christophersolomon633
    @christophersolomon633 3 years ago +1

    Excellent video - wonderfully clear.

  • @Reach41
    @Reach41 4 years ago +1

    This is one of the few channels left where p(x), with p(1) = Democrat, etc., is not a factor. Now to apply this to LIDAR ranging to produce either a Bayesian occupancy grid or a point cloud. Laser beams expand in diameter and lose energy (in air) going out from the device lens, varying in intensity both as the distance increases and, independently, across the beam as a function of both horizontal and vertical beam width.

  • @thename305
    @thename305 1 year ago

    Excellent video, your explanation was clear and helpful!

  • @salahlaaroussi9896
    @salahlaaroussi9896 2 years ago +1

    really well explained. Nice job!

  • @aalailayahya
    @aalailayahya 3 years ago +1

    Great video, keep up the work, I love it!

  • @squidgeypea
    @squidgeypea 3 years ago

    Thank you! Your videos are all really helpful and well explained.

  • @anushaavyukt6381
    @anushaavyukt6381 3 years ago +3

    Hi Ritvik, thanks for such a clear explanation. Would you please make a video on the EM algorithm? I have seen a lot of videos on it and understand the basics, but I'm not sure how to implement it for any problem. Thanks a lot.

  • @MirGlobalAcademy
    @MirGlobalAcademy 3 years ago +2

    Simple explanation. Just like spoon-feeding. Good!

  • @vitorsantana2795
    @vitorsantana2795 2 years ago

    You just saved my ass so hard right now. Thanks a lot

  • @PeterSmitGroningen
    @PeterSmitGroningen 4 years ago +2

    With the "probability spikes" example, I think a more formal explanation would be "steep gradient", or even lack of gradient. Many approximation techniques have problems with steep or sudden gradients; think neural networks.

    • @ritvikmath
      @ritvikmath  4 years ago +1

      thanks for putting a name to it! Indeed, many ML algorithms and stat methods are not happy with quick, unexpected changes.

  • @Mv-pp7is
    @Mv-pp7is 1 year ago

    This is incredibly helpful, thank you!

  • @mikeshin77
    @mikeshin77 2 years ago

    Fantastic and easy explanation. I like the way you explain!

  • @monicamilagroshuaytadurand2076
    @monicamilagroshuaytadurand2076 3 years ago

    Thank you very much! Your explanation helped me a lot!

  • @praveenkumarkazipeta
    @praveenkumarkazipeta 1 year ago

    this post is awesome, keep going

  • @edwardhartz1029
    @edwardhartz1029 2 years ago

    At around 4:30, you started at (x0,y0), but then the value of x0 was never used. Why is this?

    • @vs7185
      @vs7185 2 years ago

      I am thinking you can use either one to start the process. If you use x0, then next you will sample from p(y1 | x0); if you use y0, then next you will sample from p(x1 | y0).
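      For the standard bivariate normal case from the video, that first step is one draw either way; a minimal sketch (rho, the seed, and the starting value are illustrative, not from the video):

        import numpy as np

        rng = np.random.default_rng(42)
        rho = 0.8                      # assumed known correlation
        sd = np.sqrt(1 - rho**2)       # conditional std dev of p(x | y)

        y0 = 0.0                       # start from y0: x0 is simply never used
        x1 = rng.normal(rho * y0, sd)  # x1 ~ p(x | y0) = N(rho*y0, 1 - rho^2)
        y1 = rng.normal(rho * x1, sd)  # then y1 ~ p(y | x1)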

  • @RollingcoleW
    @RollingcoleW 2 years ago

    Thank you! I am a hobbyist and this is helpful.

  • @mrocean1293
    @mrocean1293 3 years ago

    Great explanation, love it!

  • @snehanjalikalamkar2268
    @snehanjalikalamkar2268 2 years ago +4

    Hey Ritvik, your videos are very helpful, I learned a lot from them.
    Could you also provide some references for some points that you don't cover (mostly pre-requisites)?
    In this video, I could not figure out why p(x|y) = N(ρy, 1 - ρ²). Could you please provide a reference for this?
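    That formula is the standard conditional of a bivariate normal with zero means, unit variances, and correlation ρ; a sketch of the derivation, dividing the joint density by the marginal and completing the square:

      p(x \mid y) = \frac{p(x,y)}{p(y)}
                  \propto \exp\!\left(-\frac{x^2 - 2\rho xy + y^2}{2(1-\rho^2)} + \frac{y^2}{2}\right)
                  = \exp\!\left(-\frac{(x-\rho y)^2}{2(1-\rho^2)}\right),

    i.e. X \mid Y = y \sim \mathcal{N}(\rho y,\ 1-\rho^2).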

  • @rodrigoaguilardiaz
    @rodrigoaguilardiaz 4 months ago

    One question: what if I have no idea of the correlation between the variables, and that's actually the thing I want to find? Can I combine this method with the Metropolis algorithm to find the values of mu and sigma, recalculating ρ every iteration or something like that?
    Thanks!
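    Updating a parameter with a Metropolis step inside a Gibbs-style sweep is usually called Metropolis-within-Gibbs. A minimal sketch for just the correlation (not from the video; the synthetic data, the flat prior on (-1, 1), and the step size are all assumptions):

      import numpy as np

      rng = np.random.default_rng(0)

      # hypothetical observed pairs with unknown correlation
      true_rho = 0.7
      n = 500
      data = rng.multivariate_normal([0.0, 0.0], [[1.0, true_rho], [true_rho, 1.0]], size=n)
      x, y = data[:, 0], data[:, 1]

      def log_lik(rho):
          # bivariate standard normal log-likelihood in rho, up to an additive constant
          if not -1.0 < rho < 1.0:
              return -np.inf
          q = np.sum(x**2 - 2.0 * rho * x * y + y**2)
          return -0.5 * n * np.log(1.0 - rho**2) - q / (2.0 * (1.0 - rho**2))

      rho, trace = 0.0, []
      for _ in range(5000):
          prop = rho + rng.normal(0.0, 0.05)              # random-walk proposal
          if np.log(rng.uniform()) < log_lik(prop) - log_lik(rho):
              rho = prop                                  # accept; otherwise keep current rho
          trace.append(rho)

      print(np.mean(trace[1000:]))                        # posterior mean, near true_rho

    With mu and sigma also unknown, the same kind of Metropolis step for each parameter would be cycled through in turn, exactly like the coordinate updates in plain Gibbs.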

  • @dddd-ci2zm
    @dddd-ci2zm 3 years ago

    Thank you! I finally understand it now!

  • @tsen6367
    @tsen6367 2 years ago +2

    Hello sir. First things first, I want to say thank you very much for your incredible explanations in your videos.
    I am currently working on my thesis, which uses a hierarchical Bayesian method, but I am still confused and don't understand how to determine the right prior for my data. If you don't mind and have free time, could I discuss it with you through social media? I really need someone to guide me 🙏 Thank you very much in advance, sir.

  • @서영빈-u2e
    @서영빈-u2e 1 month ago

    Very clear! Thank you so much.

  • @marcoantoniocoutinho
    @marcoantoniocoutinho 3 years ago +2

    Great video, thanks. How could I relate Gibbs sampling (conceptually or intuitively) to Markov chain modeling of the variables, given that I'm building a sampler based on their conditional probabilities?

  • @vs7185
    @vs7185 2 years ago

    Is there no accept/reject step here like in Metropolis-Hastings or rejection sampling?

  • @eduardo.garcia
    @eduardo.garcia 2 years ago

    Thanks a lot for all your videos!!! Please do Hamiltonian Monte Carlo next :D

  • @chainonsmanquants1630
    @chainonsmanquants1630 3 years ago

    Am I right if I say that Gibbs sampling is possible only when you know the marginal probability distribution for each variable?

  • @filosofiadetalhista
    @filosofiadetalhista 2 years ago

    Tight video. Thanks!

  • @LL-lb7ur
    @LL-lb7ur 2 years ago

    Thank you for the video. For what real-life problems can you use Gibbs sampling, and what do you get at the end of the sampling?

  • @Danielschuko
    @Danielschuko 3 months ago +1

    I kiss your heart brother! 🙏🙏🙏

  • @prof1math
    @prof1math 3 years ago +1

    Great explanation, keep it up, thanks!

  • @shahf13
    @shahf13 4 years ago +2

    Great channel! Can you do a video about autoencoders?

  • @Alexander-pk1tu
    @Alexander-pk1tu 3 years ago

    Thank you! Very good video.

  • @princessefleure8360
    @princessefleure8360 3 years ago

    Thank you so much for this video, it helped me a lot!
    I just had a question: if I understood correctly, with 3 variables we have to compute p(x | y, z).
    But how do we know the "ρ" in this case? I guess we need a 3×3 covariance matrix.
    Have a good day!
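    The single ρ of the 2-D case does generalize to a covariance matrix. A sketch of the standard Gaussian conditioning result, partitioning the vector into the coordinate being updated (x_1) and the rest (x_2):

      x_1 \mid x_2 \sim \mathcal{N}\!\left(\mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2),\ \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\right),
      \quad \text{where} \quad
      \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}.

    With three variables, each Gibbs step conditions one coordinate on the other two, so Σ_{22} is the 2×2 block for the pair being held fixed.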

  • @leohsusolid
    @leohsusolid 2 years ago

    Great videos! They make the concept very clear! Thank you!
    I have a question about the correction: after sampling (X0, Y0), how can we sample (X1, Y1)? In other words, what is the condition when we change both? Or do we just sample X1 and Y1 one at a time?

    • @leohsusolid
      @leohsusolid 2 years ago

      The other question is: if we go from (X0, Y0) to (X1, Y1), then we don't face the "probability spike" situation, do we?

    • @apah
      @apah 2 years ago

      The reason he made the correction is that what we call a sample is (xi, yi). Therefore an iteration of Gibbs is the update to both variables with the method he gave: sampling x1 given y0, then y1 given x1.

    • @leohsusolid
      @leohsusolid 2 years ago

      @@apah Thank you for replying!
      Do you mean that we can sample (X1, Y1), but within this sample there is an order: X1 is drawn first, given Y0, and then Y1, given X1?

    • @apah
      @apah 2 years ago

      @@leohsusolid My pleasure! Exactly, starting with either one is fine. As I said earlier, a sample is by definition the pair (Xi, Yi). The point of Gibbs sampling is to find a way to make these samples grow closer and closer to samples drawn from the actual distribution P(X, Y), and the method to do so is to alternately sample from the conditional distributions.
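    A minimal, self-contained sketch of that alternating scheme for the video's standard bivariate normal (the value of rho, the seed, and the burn-in length are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      rho = 0.8                       # correlation of the target bivariate normal
      sd = np.sqrt(1 - rho**2)        # conditional std dev: p(x|y) = N(rho*y, 1 - rho^2)

      n_samples, burn_in = 10_000, 1_000
      x, y = 0.0, 0.0                 # arbitrary starting point
      samples = []
      for i in range(n_samples + burn_in):
          x = rng.normal(rho * y, sd)   # draw the new x given the current y ...
          y = rng.normal(rho * x, sd)   # ... then the new y given that x
          if i >= burn_in:
              samples.append((x, y))    # one (xi, yi) pair = one Gibbs sample

      samples = np.array(samples)
      print(np.corrcoef(samples.T))     # off-diagonal should be close to rho

    The off-diagonal of the printed correlation matrix landing near rho is a quick sanity check that the alternating conditional draws really do target P(X, Y).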

  • @juanpabloaguilarcabezas8089
    @juanpabloaguilarcabezas8089 3 years ago +1

    Can you do a video on Hamiltonian Monte Carlo?

  • @ChenmingPu
    @ChenmingPu 4 months ago

    you r my hero

  • @shirleygui6533
    @shirleygui6533 1 year ago

    so clear

  • @BreezeTalk
    @BreezeTalk 2 years ago

    Please show a code implementation

  • @AleeEnt863
    @AleeEnt863 1 year ago

    A big thanks!

  • @apica1234
    @apica1234 3 years ago

    Could you please explain hands-on?

  • @cleansquirrel2084
    @cleansquirrel2084 4 years ago +4

    I'm watching

  • @Abhilashaisgood
    @Abhilashaisgood 7 months ago

    amazing

  • @Gasgar800
    @Gasgar800 3 years ago

    Sick! Thanks

  • @MoodyMooMoo
    @MoodyMooMoo 1 year ago

    Thanks!

  • @AshokKumar-lk1gv
    @AshokKumar-lk1gv 4 years ago +2

    nice