
Gibbs Sampling: Data Science Concepts

  • Published: 26 Jan 2021
  • Another MCMC method. Gibbs sampling is great for multivariate distributions where the conditional densities are easy to sample from.
    To emphasize a point in the video:
    - First sample is (x0, y0)
    - Next sample is (x1, y1)
    - Next sample is (x2, y2)
    ...
    That is, we update all variables once to get a new sample (see the sketch below).
    Intro MCMC Video: • Markov Chain Monte Car...
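
    A minimal sketch of this scheme in Python, using the video's standard bivariate normal example (zero means, unit variances, correlation ρ), where both conditional densities are known in closed form. This is an illustrative sketch, not code from the video:

```python
# Gibbs sampling for a standard bivariate normal with correlation rho.
# Conditionals used in the video: x | y ~ N(rho*y, 1 - rho^2)
#                                 y | x ~ N(rho*x, 1 - rho^2)
import numpy as np

rng = np.random.default_rng(42)
rho = 0.9
x, y = 0.0, 0.0                  # starting sample (x0, y0)
samples = [(x, y)]

for _ in range(10_000):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # draw x_{i+1} given y_i
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # draw y_{i+1} given x_{i+1}
    samples.append((x, y))       # both variables updated -> one new sample

# Sanity check: the sample correlation should be close to rho.
print(np.corrcoef(np.array(samples).T)[0, 1])
```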

Comments • 71

  • @Adam-ec9dk 3 years ago +71

    I like that you wrote all the major points on the board and fit everything into one slide. Super easy to take a screenshot so I can remember the gist of the video.

  • @ResilientFighter 3 years ago +37

    Ritvik, your videos rank at the top when a person searches "Metropolis-Hastings" or "Gibbs sampling". Great job, man!

  • @musclesmalone 3 years ago +9

    Fantastic, concise explanation and excellent visualisations. It's also much appreciated that everything is written out prior to recording, so thousands of people (and in some cases millions) aren't waiting while watching you draw a graph or write a formula. Huge appreciation for your work, thank you!

  • @rahulchowdhury9739 1 year ago +2

    You're one of the best teachers of statistics. Thanks for taking the time to share the way you understand theories and problems.

  • @Ciavi-ar 6 months ago +1

    This actually helped me finally wrap my brain around this topic. Thanks!

  • @user-me9mw5oc7u 3 years ago +2

    Thanks, you are soooooo good at explaining. I will recommend that my professor take a look at your videos.

  • @DhruveeChauhan 1 year ago +1

    You are literally saving us one day before an exam!

  • @des6309 3 years ago +1

    Dude, you're so talented at explaining.

  • @md.salahuddinparvez6578 1 month ago

    In our master's course on Pattern Analysis at one of the top-ranking universities in Germany, the professor has actually put a link to this video in the slides. After watching the video, I understand why. You have done a great job explaining, thank you!

  • @adamtran5747 1 year ago

    Absolutely love the content, brother. Please keep up the amazing work.

  • @squidgeypea 2 years ago

    Thank you! Your videos are all really helpful and well explained.

  • @thename305 1 year ago

    Excellent video, your explanation was clear and helpful!

  • @salahlaaroussi9896 2 years ago +1

    Really well explained. Nice job!

  • @aalailayahya 3 years ago +1

    Great video, keep up the work, I love it!

  • @Mv-pp7is 1 year ago

    This is incredibly helpful, thank you!

  • @monicamilagroshuaytadurand2076 2 years ago

    Thank you very much! Your explanation helped me a lot!

  • @AdrianYang 3 years ago

    Thank you for your video, Ritvik. Can I understand this as: searching within a multi-dimensional space is difficult because there are infinitely many choices of direction, while by fixing all the other dimensions and leaving only one movable, searching within a one-dimensional space becomes super easy because there are only two choices of direction.

  • @mrocean1293 3 years ago

    Great explanation, love it!

  • @rmb706 5 months ago

    I had to write a Gibbs sampler for my Bayes midterm. That moment when I checked it with PyMC and it was spot on at the first attempt just felt amazing. 🎉 🔥

  • @dddd-ci2zm 3 years ago

    Thank you! I finally understand it now!

  • @vitorsantana2795 2 years ago

    You just saved my ass so hard right now. Thanks a lot

  • @praveenkumarkazipeta 11 months ago

    This post is awesome, keep going!

  • @christophersolomon633 3 years ago +1

    Excellent video - wonderfully clear.

  • @anushaavyukt6381 2 years ago +3

    Hi Ritvik, thanks for such a clear explanation. Would you please make a video on the EM algorithm? I saw a lot of videos on it and understand the basics, but I'm not sure how to implement it for any problem. Thanks a lot.

  • @RollingcoleW 1 year ago

    Thank you! I am a hobbyist and this is helpful.

  • @shuangli5466 9 months ago

    Thank you for giving me probably 15 marks on my exam and lowering my probability of failing from 10% to 5%.

  • @filosofiadetalhista 2 years ago

    Tight video. Thanks!

  • @marcoantoniocoutinho 3 years ago +2

    Great video, thanks. How could I conceptually or intuitively connect Gibbs sampling with Markov chain modeling of the variables, given that the sampler is built from their conditional probabilities?

  • @AleeEnt863 1 year ago

    A big thanks!

  • @mikeshin77 1 year ago

    Fantastic and easy explanation. I like the way you explain!

  • @Alexander-pk1tu 2 years ago

    Thank you! Very good video.

  • @eduardo.garcia 2 years ago

    Thanks a lot for all your videos!!! Please do Hamiltonian Monte Carlo next, please :D

  • @shirleygui6533 1 year ago

    so clear

  • @Abhilashaisgood 1 month ago

    amazing

  • @senyaisavnina 2 years ago

    This high-density bubble is like a supermassive black hole: once you get there, you never go out :)

  • @MoodyMooMoo 10 months ago

    Thanks!

  • @LL-lb7ur 2 years ago

    Thank you for the video. What real-life problems can you use Gibbs sampling for, and what do you get at the end of sampling?

  • @Reach41 3 years ago +1

    This is one of the few channels left where p(x), with p(1) = Democrat, etc., is not a factor. Now to apply this to LIDAR ranging to produce either a Bayesian occupancy grid or a point cloud. Laser beams expand in diameter and lose energy (in air) going out from the device lens, and they vary in intensity both as distance increases and across the beam as a function of horizontal and vertical beam width.

  • @snehanjalikalamkar2268 1 year ago +4

    Hey Ritvik, your videos are very helpful, I learned a lot from them.
    Could you also provide some references for points that you don't cover (mostly prerequisites)?
    In this video, I could not figure out why p(x|y) = N(ρy, 1 - ρ²). Could you please provide a reference for this?
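
    (For reference: this is the standard conditional-distribution result for a bivariate normal. In general,

        X | Y = y ~ N( μ_X + ρ·(σ_X/σ_Y)·(y − μ_Y), σ_X²·(1 − ρ²) ),

    so with the standard normals used in the video, μ_X = μ_Y = 0 and σ_X = σ_Y = 1, it reduces to N(ρy, 1 − ρ²). Any treatment of "conditional distributions of the multivariate normal" covers this.)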

  • @MirGlobalAcademy 3 years ago +2

    Simple explanation, just like spoon-feeding. Good!

  • @tsen6367 1 year ago +2

    Hello sir. First things first, I want to say thank you very much for your incredible explanations in your videos.
    I am currently working on my thesis, which uses a hierarchical Bayesian method, but I am still confused and don't understand how to determine the right prior for my data. If you don't mind and have free time, could I discuss this with you through social media? I really need someone to guide me 🙏 Thank you very much in advance, sir.

  • @Gasgar800 3 years ago

    Sick! Thanks!

  • @princessefleure8360 3 years ago

    Thank you so much for this video, it helps me a lot!
    I just had a question: if I understood correctly, if we have 3 variables we have to calculate p(x|(y,z)).
    But how do we know the "ρ" in this case? I guess we need a 3×3 covariance matrix.
    Have a good day!
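
    (A note on the three-variable case: for jointly Gaussian variables, the single ρ is replaced by the full covariance matrix Σ, and each conditional p(x_i | x_rest) is again normal, with mean and variance computed from Σ. Below is a minimal sketch under those assumptions, with an example covariance matrix chosen for illustration; it is not code from the video.)

```python
# Gibbs sampling for a zero-mean Gaussian in 3 dimensions.
# Each conditional follows from the covariance matrix Sigma:
#   x_i | x_rest ~ N( Sigma[i,rest] @ inv(Sigma[rest,rest]) @ x_rest,
#                     Sigma[i,i] - Sigma[i,rest] @ inv(Sigma[rest,rest]) @ Sigma[rest,i] )
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.8, 0.3],   # example covariance (assumed)
                  [0.8, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])

def gibbs_mvn(Sigma, n_samples=5000):
    d = Sigma.shape[0]
    x = np.zeros(d)                  # arbitrary starting point
    out = np.empty((n_samples, d))
    for t in range(n_samples):
        for i in range(d):           # update one coordinate at a time
            rest = [j for j in range(d) if j != i]
            A = Sigma[i, rest] @ np.linalg.inv(Sigma[np.ix_(rest, rest)])
            mean = A @ x[rest]
            var = Sigma[i, i] - A @ Sigma[rest, i]
            x[i] = rng.normal(mean, np.sqrt(var))
        out[t] = x                   # one full sweep = one new sample
    return out

# Sanity check: the sample covariance should be close to Sigma
# (ignoring burn-in for simplicity).
print(np.cov(gibbs_mvn(Sigma).T))
```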

  • @prof1math 3 years ago +1

    Great explanation, keep it up, thanks!

  • @chainonsmanquants1630 3 years ago

    Am I right if I say that Gibbs sampling is possible only when you know the marginal probability distribution for each variable?

  • @PeterSmitGroningen 3 years ago +2

    With the "probability spikes" example, I think a more formal explanation would be a "steep gradient", or even a lack of gradient. Many approximation techniques have problems with steep or sudden gradients; think neural networks.

    • @ritvikmath 3 years ago +1

      Thanks for putting a name to it! Indeed, many ML algorithms and stat methods are not happy with quick, unexpected changes.

  • @vs7185 2 years ago

    Is there no accept/reject step here like in Metropolis-Hastings or rejection sampling?

  • @anondsml 10 months ago

    Do you offer any tutoring in Bayesian statistics?

  • @cleansquirrel2084 3 years ago +4

    I'm watching

  • @shahf13 3 years ago +2

    Great channel! Can you do a video about autoencoders?

  • @leohsusolid 2 years ago

    Great videos! They make the concept very clear! Thank you!
    I have a question about the correction: after sampling (X0, Y0), how can we sample (X1, Y1)? In other words, what do we condition on when we change both? Or do we just sample X1 and Y1 separately?

    • @leohsusolid 2 years ago

      The other question: if we go from (X0, Y0) to (X1, Y1), then we don't face the "probability spike" situation, do we?

    • @apah 2 years ago

      The reason he made the correction is that what we call a sample is (xi, yi). Therefore one iteration of Gibbs updates both variables with the method he gave: sampling x1 given y0, then y1 given x1.

    • @leohsusolid 2 years ago

      @apah Thank you for replying!
      Do you mean that we can sample (X1, Y1), but within this sample there is an order: first X1 given Y0, then Y1 given X1?

    • @apah 2 years ago

      @leohsusolid My pleasure! Exactly, starting with either one is fine. As I said earlier, a sample is by definition the pair (Xi, Yi). The point of Gibbs sampling is to make these samples grow closer and closer to samples drawn from the actual distribution P(X, Y), and the method to do so is to alternately sample from the conditional distributions.

  • @AshokKumar-lk1gv 3 years ago +2

    nice

  • @juanpabloaguilarcabezas8089 3 years ago +1

    Can you do a video on Hamiltonian Monte Carlo?

  • @BreezeTalk 2 years ago

    Please show a code implementation

  • @apicasharma2499 3 years ago

    Could you please explain this hands-on?

  • @edwardhartz1029 2 years ago

    At around 4:30, you started at (x0, y0), but then the value of x0 was never used. Why is this?

    • @vs7185 2 years ago

      I think you can use either one to start the process. If you use x0, then next you will sample from p(y1 | x0); if you use y0, then next you will sample from p(x1 | y0).