3. Parametric Inference

  • Published: 25 Nov 2024

Comments • 66

  • @richardmerckling592
    @richardmerckling592 3 years ago +111

    love the tranquil waterfall in these MIT classrooms

  • @hyungjinchung3690
    @hyungjinchung3690 5 years ago +22

    For those of you who are absolute beginners, yes, this course might be quite hard to follow. However, for people like me, who have not taken statistics or probability theory courses during their undergraduate studies but have some background nonetheless, this course is very insightful and beautifully brought together.

    • @abcdxx1059
      @abcdxx1059 4 years ago +2

      Yep, I started with 18.600 for this reason.

    • @phillustrator
      @phillustrator 1 year ago

      I'm a data scientist trying to review the basics, and I often have a hard time just figuring out what he's trying to say.
      I don't think he put enough thought into his delivery.
      Lecture notes would be helpful instead of just slides. It's often easier to convey your ideas in writing if you're a novice teacher.

  • @xenosicotte
    @xenosicotte 6 years ago +31

    Agreed, you will need some background in statistics and calculus to follow. But if you have the prerequisites, this is excellent.

    • @dimitargueorguiev9088
      @dimitargueorguiev9088 1 year ago +2

      I agree. Philippe Rigollet is very good, besides being a refreshing and entertaining lecturer. I've thoroughly enjoyed the lectures so far.

  • @aliceinwonderchen2423
    @aliceinwonderchen2423 6 years ago +6

    I really recommend this course! There are a lot of in-depth comments that refresh your understanding of statistics.

  • @arsenyturin
    @arsenyturin 3 years ago +21

    Is there a technology at MIT to effectively remove chalk from the chalkboard?

  • @ovagomes1
    @ovagomes1 6 years ago +8

    Good job! Such a refreshing lecture, and easy to follow if you have a background in statistics.

  • @ChannelMath
    @ChannelMath 10 months ago

    MIT loves its thick chalk, which is great, except when the buildings people don't use the squeegee between your class and the previous one.

  • @ariellubonja7856
    @ariellubonja7856 3 years ago +6

    Hi guys! Can you please reupload this with the background static removed?

  • @MuhammadGhufran5
    @MuhammadGhufran5 7 years ago +3

    Excellent lecture! Thank you, MIT!

  • @dennisestenson7820
    @dennisestenson7820 2 years ago +3

    MIT can't figure out how to apply a filter to the audio to remove the hiss?

  • @ericgrey4744
    @ericgrey4744 3 years ago +1

    Why is the volume so low on this video? It's pumped up to full and I can barely hear him.

  • @aniruddhnls
    @aniruddhnls 6 years ago +25

    1:00:00 shouldn't it be 1-P(Z

    • @yzhang4761
      @yzhang4761 6 years ago +10

      Yes. Or equivalently Phi(mu / sigma).

    • @CathyZhang
      @CathyZhang 3 years ago +1

      I agree with you; there is a mistake.

    • @sumitkumarsingh5290
      @sumitkumarsingh5290 3 years ago

      How do we calculate the mode if the highest frequency occurs twice?

    • @aniruddhnls
      @aniruddhnls 3 years ago +1

      @@sumitkumarsingh5290 Then both of those are modes. That’s the thing about mode, there need not be just one.

    • @formlessspace3560
      @formlessspace3560 1 year ago

      @@CathyZhang No, they are the same. The normal distribution is symmetric: phi(x) = phi(-x).
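
A quick numeric check of the symmetry point made in this thread. This is only a sketch: Phi denotes the standard normal CDF, and the values of mu and sigma are arbitrary illustrations, not values from the lecture.

```python
# Symmetry of the standard normal CDF: 1 - Phi(-x) = Phi(x),
# so both ways of writing the probability at 1:00:00 agree.
from statistics import NormalDist

Phi = NormalDist().cdf          # standard normal CDF
mu, sigma = 1.3, 2.0            # hypothetical parameter values
lhs = 1 - Phi(-mu / sigma)
rhs = Phi(mu / sigma)
assert abs(lhs - rhs) < 1e-12   # the two expressions agree
```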

  • @laodrofotic7713
    @laodrofotic7713 1 year ago +4

    If you take this class, my advice is: don't let this be your first stats class, or you will be in trouble. This professor might be super clear if you already know what he is explaining, but if you don't, he is a terrible professor. The only ones who do well are the ones who already took other courses. He knows his stuff, of course, but he is not an efficient communicator; he can't clearly think from the student's perspective, and his TAs are just like him. You will be dazed and confused, just a fair warning. On top of that, he also makes several mistakes in what he writes, and he has the annoying habit of speeding up and talking fast as the content gets more complicated, instead of doing the exact opposite, as a professor should. Case in point: at 47:04 the professor changes variables to x when it should be y. It's a small detail, right? But in newly introduced concepts this can cause confusion. Put it together with a fast pace, and you will struggle with things that are quite easy; because his communication skills are lacking, the easy becomes confusing and difficult fast.

  • @raining_macondo
    @raining_macondo 6 years ago +1

    Wow, that's a first for me: a professor riding a bike while teaching. Very MIT. Respect.

    • @ChiGao-r8w
      @ChiGao-r8w 4 years ago +2

      The professor injured his foot and is teaching through the injury. That "bicycle" is a one-legged knee scooter.

  • @cecilechau7932
    @cecilechau7932 8 months ago

    All models are wrong but some of them are useful!

    • @cecilechau7932
      @cecilechau7932 8 months ago

      Is my data iid? For each individual party - maybe yes!

    • @cecilechau7932
      @cecilechau7932 8 months ago

      Parametric models vs. nonparametric.

  • @realhumphreyappleby
    @realhumphreyappleby 4 years ago +4

    Did MIT clone Joe Blitzstein?

  • @Splatpope
    @Splatpope 6 years ago +6

    2:00 dang I should go to MIT

  • @mariano5704
    @mariano5704 1 year ago

    I don't understand why MIT publishes such good material with such terrible audio.

    • @mitocw
    @mitocw  1 year ago +10

      Let us tell you our tale of woe... it's of materials captured by instructors, outside camera crews, and distance learning systems. Problems of dead batteries, badly patched audio, camera audio from the back of the room, and of AV crews that are not actively monitoring their feeds. We try to do the best we can with the materials we are given. 🤷

  • @varunsaproo4120
    @varunsaproo4120 5 years ago +1

    At 1:12:36, the instructor stated the expectation to be P. What is P representing here? In the case of the statistical model, P represents the family of distributions...

    • @MrBrij2385
      @MrBrij2385 2 years ago

      Right after saying P, he said "the probability of turning your head right". He was alluding to the example of the "kissing problem" (or whatever the name was). In that problem, the estimator for p (the parameter of the Bernoulli distribution) was simply the expectation of the random variable X, and p was the parameter of the model. In general, think of the output of an estimator as the value of the parameter of the model; you can denote it with some letter, like p in this case.

    • @weimingsheng3214
      @weimingsheng3214 6 months ago

      Because the parameter of a Bernoulli is generally denoted by p. Note that this p is lowercase, while the P denoting a distribution is uppercase.
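
As a sketch of the thread above: for i.i.d. Bernoulli(p) data, the sample mean is the estimator being discussed. The values of p_true and n below are illustrative assumptions, not numbers from the lecture.

```python
# For i.i.d. Bernoulli(p) data, the sample mean X-bar estimates p.
import random

random.seed(0)
p_true, n = 0.7, 100_000
xs = [1 if random.random() < p_true else 0 for _ in range(n)]
p_hat = sum(xs) / n             # X-bar, the sample mean
assert abs(p_hat - p_true) < 0.01
```

By the law of large numbers, p_hat concentrates around p_true as n grows, which is why the estimator and the expectation coincide in this example.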

  • @jiehe6943
    @jiehe6943 7 years ago +9

    I've watched the first three videos so far, and I find it's not easy to follow the professor. It looks like I should read through all the slides before coming back to the videos.

    • @YouTubeChannel-jw5th
      @YouTubeChannel-jw5th 5 years ago +1

      You're supposed to read or at least skim the textbook before lecture, and then again after lecture with the professor's outlines.

    • @abcdxx1059
      @abcdxx1059 4 years ago

      @Neeraj Sahu ae19m023 Let me know if you find out.

    • @muhammadabdelshafi6909
      @muhammadabdelshafi6909 4 years ago

      @@abcdxx1059 All of Statistics, by Wasserman.

  • @normalperson1130
    @normalperson1130 4 years ago +2

    The white noise in the background is the most irritating part of these lectures. Does anyone know a way to denoise these videos? It's so difficult to focus because of this static noise.

    • @Yseerv
      @Yseerv 13 days ago

      Use Slutsky's theorem.

  • @infiniteen
    @infiniteen 3 years ago

    @1:08:40 - "it is weakly convergent if it converges in probability, and strongly convergent if it converges al naturel?"

  • @ML_n00b
    @ML_n00b 6 months ago

    Isn't theta hat an estimator, not a statistic?

  • @pablock0
    @pablock0 1 year ago

    Nice

  • @oyaoya2468
    @oyaoya2468 1 year ago

    What is the textbook for this subject?

    • @mitocw
      @mitocw  1 year ago +2

      No textbook is listed for this course. See the course on MIT OpenCourseWare for more info at: ocw.mit.edu/18-650F16. Best wishes on your studies!

  • @Marteenez_
    @Marteenez_ 2 years ago

    Why are the parameters of one of the normal distributions he describes identifiable and the others aren't?

    • @weimingsheng3214
      @weimingsheng3214 6 months ago

      Because given infinite data you can only pin down the ratio mu/sigma. That fails to uniquely determine the vector (mu, sigma), since multiplying both mu and sigma by the same constant leaves the ratio unchanged, so the map from (mu, sigma) to the observed distribution is not injective. The map from mu/sigma, however, is injective. This makes sense because mu/sigma is intuitively a smaller class than (mu, sigma), though of course the two have the same cardinality.
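
Assuming the lecture's example is the one where only the sign of X ~ N(mu, sigma^2) is observed, the observable quantity is P(X > 0) = Phi(mu/sigma), so two parameter pairs with the same ratio are indistinguishable from data. A minimal sketch (the specific pairs are illustrative):

```python
# Two parameter pairs with the same ratio mu/sigma give the same P(X > 0):
# (1, 2) and (2, 4) both yield Phi(0.5), so (mu, sigma) is not identifiable
# from the sign of X alone, while mu/sigma is.
from statistics import NormalDist

p1 = 1 - NormalDist(mu=1.0, sigma=2.0).cdf(0)   # P(X > 0) under (1, 2)
p2 = 1 - NormalDist(mu=2.0, sigma=4.0).cdf(0)   # P(X > 0) under (2, 4)
assert abs(p1 - p2) < 1e-12                     # indistinguishable
assert abs(p1 - NormalDist().cdf(0.5)) < 1e-12  # both equal Phi(mu/sigma)
```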

  • @Marteenez_
    @Marteenez_ 2 years ago

    1:16:00 What is convergence in L2, and why does it imply convergence in probability?

    • @newtonswig
      @newtonswig 2 years ago

      It means the expected squared difference between the terms of the sequence and the limit, E[(X_n - X)^2], converges to zero.
      Roughly, it implies that in the limit the two agree up to a set of measure zero, so that as random variables there is no difference.

    • @weimingsheng3214
      @weimingsheng3214 6 months ago

      Watch the last lecture.
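
For reference, the standard one-line argument behind this thread applies Markov's inequality to the squared difference:

```latex
\mathbb{P}\bigl(|X_n - X| > \varepsilon\bigr)
  = \mathbb{P}\bigl((X_n - X)^2 > \varepsilon^2\bigr)
  \le \frac{\mathbb{E}\bigl[(X_n - X)^2\bigr]}{\varepsilon^2}
```

So if E[(X_n - X)^2] -> 0 (convergence in L2), the right-hand side vanishes for every fixed epsilon > 0, which is exactly convergence in probability.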

  • @luojihencha
    @luojihencha 3 years ago

    Are there any other statistics courses from MIT?

    • @mitocw
      @mitocw  3 years ago

      Here is what we have for statistics and probability: ocw.mit.edu/courses/find-by-topic/#cat=mathematics&subcat=probabilityandstatistics. Best wishes on your studies!

  • @ryanchiang9587
    @ryanchiang9587 1 year ago

    Parametric

  • @mingyang8183
    @mingyang8183 6 years ago +4

    Excellent instructor, but poor handwriting.

  • @cauchydistributed161
    @cauchydistributed161 2 years ago +1

    This lecture seems to be somewhat lacking in examples and illustrations.

  • @jaspreetsingh-nr6gr
    @jaspreetsingh-nr6gr 1 year ago

    @ 32:16 here atm

  • @FineFlu
    @FineFlu 4 years ago +3

    The prof seems a bit disorganized; otherwise, a good lecture series.

  • @theodiggers
    @theodiggers 2 years ago

    estiMAYter

  • @pranishramteke7642
    @pranishramteke7642 4 years ago

    Every time this guy says "Poisson" I think of Carl saying croissant
    ruclips.net/video/dg2S08WAIAo/видео.html