Lecture 5 - GDA & Naive Bayes | Stanford CS229: Machine Learning Andrew Ng (Autumn 2018)

  • Published: 16 Apr 2020
  • For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: stanford.io/ai
    Andrew Ng
    Adjunct Professor of Computer Science
    www.andrewng.org/
    To follow along with the course schedule and syllabus, visit:
    cs229.stanford.edu/syllabus-au...

Comments • 54

  • @zZTrungZz · 4 months ago +4

    Learning from examples drawn from practical experience, from someone with probably more than a decade of it, is always wonderful. Thank you for the lecture!

  • @A_Random_Ghost · 7 months ago +3

    For anyone who finds this useful.
    A way to interpret the parameter formulas (the estimates themselves are sketched below for reference).
    Since phi is the probability of y being 1, its formula is the number of ones divided by the total number of examples, which is exactly what we get because the indicator function only counts the ones.
    The mean of the zero class is the sum of the elements of the zero class divided by the number of elements in that class. And since the indicator function only counts those in the class, that's what the formula calculates. Same for the mean of the one class.
    The covariance formula is just the standard variance-covariance matrix of a random vector from probability theory.
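    For readers who want the formulas the comment above is describing, here is a minimal NumPy sketch of those maximum-likelihood estimates (the function name fit_gda and the array layout are my own choices, not from the lecture):

    import numpy as np

    def fit_gda(X, y):
        """Maximum-likelihood estimates for Gaussian Discriminant Analysis.

        X: (m, n) array of feature vectors, y: (m,) array of 0/1 labels.
        """
        m = X.shape[0]
        phi = np.mean(y == 1)             # fraction of examples with y = 1
        mu0 = X[y == 0].mean(axis=0)      # mean of the zero class
        mu1 = X[y == 1].mean(axis=0)      # mean of the one class
        # Shared covariance: each example is centered by its own class mean.
        mu_y = np.where((y == 1)[:, None], mu1, mu0)
        diff = X - mu_y
        Sigma = diff.T @ diff / m
        return phi, mu0, mu1, Sigma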

  • @welcomethanks5192 · 1 year ago +13

    GDA starts around 39:00
    Naive Bayes 1:04:00

  • @dsazz801 · 1 year ago +11

    Thank you for the amazing lecture with super high quality. Easy to understand, well explained, and beautifully organized.

  • @debdeepsanyal9030 · 11 months ago +7

    57:25 is where Andrew talks about his preference and how he chooses between logistic regression and GDA

  • @tampopo_yukki · 10 months ago +7

    This lecture helped me connect the dots and lay the ground for GDA. Thanks a lot!

  • @kendroctopus · 1 year ago +4

    Thank you so much for this lecture!

  • @mikegher879 · 1 year ago +3

    Brilliant #Andrew Thanks

  • @TheBanananutbutton · 5 months ago +10

    plot twist: there's only one student: batman

  • @jasperbutcher2596 · 11 months ago +2

    Anyone know how to access the problem sets? Are they public material?

    • @Dimi231 · 3 months ago

      I am also curious about that!!

  • @hibamajdy9769 · 6 months ago

    I have a question: how do you come up with the decision boundary in GDA?

    • @littleKingSolomon · 3 months ago

      Perhaps the boundary is described by the curve obtained by plotting p(y=1|x) for varying x (a more concrete sketch is below).
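      More concretely (a sketch, not something spelled out at this exact point in the lecture): after fitting φ, μ₀, μ₁, Σ, GDA classifies with the posterior p(y|x), and the decision boundary is the set of points where the two classes are equally probable:

      \[
      \text{boundary} = \{\, x : p(y=1 \mid x) = \tfrac{1}{2} \,\}
      \quad\Longleftrightarrow\quad
      p(x \mid y=1)\,\phi = p(x \mid y=0)\,(1-\phi).
      \]

      Because both class-conditional Gaussians share the same covariance Σ, the quadratic terms cancel and the resulting boundary is linear in x, which is the connection to logistic regression discussed later in the lecture.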

  • @WaseemKhan-dp2hi · 1 year ago +3

    At 27:15 he writes the optimum value of phi that maximizes the likelihood function. It shows that when a new person comes in, all we need is the ratio of positive cases to the total number of training samples. If this ratio is 30%, we would say (without investigating symptoms and without calculating with x) that the new person is 30% likely to be a positive case. It looks very odd that, to declare a new case positive or negative, we don't consider the x (features) of the new test sample. Can someone correct me if my understanding is wrong?

    • @abhijitpai6085 · 1 year ago +2

      This situation relates to the frequentist statistics vs Bayesian statistics discussion; read about "priors" to understand how both camps view computing probabilities.

    • @LZ-re9bm · 1 year ago +2

      Phi is the probability that he also denotes as p(y). This is by definition the probability that a new person will be positive, given no further information, so given no data. Basically, it is the best guess you have for whether a random person is positive, before you know anything about the person. It makes sense that with no further information on the individual, your best guess is just the ratio in the general population. He is trying to clarify this in his sentence: "What is the chance that the NEXT patient that walks into your office has a malignant tumor?" The key word is next, so this is happening in the future and you can't have any data yet on that person, but I agree it's easy to misunderstand.
      The probability that the person is positive given the features is denoted p(y|x). He shows how to calculate this earlier in the lecture. Unsurprisingly, it does depend on the data x. So if you have features, you do consider them to make your prediction, which is p(y|x) (see the sketch below).
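      To make the two quantities explicit, a sketch in the notation of the lecture: φ = p(y=1) is the prior you fall back on with no features, while the prediction that does use the features is the posterior obtained from Bayes' rule with the fitted class-conditional Gaussians:

      \[
      p(y=1 \mid x)
      = \frac{p(x \mid y=1)\,p(y=1)}{p(x \mid y=1)\,p(y=1) + p(x \mid y=0)\,p(y=0)},
      \qquad p(y=1) = \phi .
      \]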

  • @kevinshao9148 · 7 months ago

    Great lecture! But does anyone know why, at 1:09:43, we need 2^10000 parameters here? Thanks a lot!

    • @OK-lj5zc · 6 months ago +2

      I think it's because in the multinomial model there would be 2^10000 possible values for x, and each value has an associated probability that x equals it. Actually, since all the probabilities add up to one, we only need (2^10000) - 1 (a small worked count is sketched below).
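      Spelling out that count for a general vocabulary of n binary features (the lecture's case is n = 10000):

      \[
      x \in \{0,1\}^{n} \;\Rightarrow\; 2^{n} \text{ possible values of } x,
      \qquad
      \underbrace{2^{n} - 1}_{\text{full multinomial over } x}
      \quad \text{vs.} \quad
      \underbrace{n}_{\text{Naive Bayes: } \prod_{j=1}^{n} p(x_j \mid y)}
      \text{ parameters per class.}
      \]

      The reduction to n parameters per class (plus the class prior φ) is exactly what the conditional-independence assumption buys you.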

    • @kevinshao9148 · 6 months ago

      @@OK-lj5zc Thank you so much for your explanation, Sir! It's actually a simple concept with your explanation, really appreciate it!

  • @humanity_first48 · 1 year ago

    Can anyone help me out with this question: "why do we only use the standard Gaussian distribution"?

    • @rijrya · 1 year ago +3

      He explained it before; it's because of the central limit theorem.

  • @praveengupta2271 · 6 months ago

    The lecture is amazing, but I want the notes or a PDF of whatever is discussed in this lecture.

    • @OEDzn · 2 months ago

      cs229.stanford.edu/lectures-spring2022/main_notes.pdf

  • @Tyokok · 6 months ago +1

    Thanks for the great lecture! One question please, if I may (to anyone): is Naive Bayes the same method as the Bayesian conditional probability modeling method (a special case or so), or are they two completely separate modeling methods? Many thanks in advance!

    • @TheBanananutbutton · 5 months ago

      25:12 might answer your question, I'm not sure

    • @Tyokok · 5 months ago

      @@TheBanananutbutton Thank you for replying! And happy to discuss. No, that's not it. 25:12 points out the difference between generative and discriminative models (though I am not sure what those names stand for in depth). And I think I vaguely got the answer that Naive Bayes and Bayesian modeling are the same approach; I just need to make it more solid to connect them fully.

    • @AdityaByju · 3 months ago

      Naive Bayes can be seen as a specific application of Bayesian modeling, where we assume conditional independence of the features given the class label (see the factorization sketched below). However, Bayesian modeling is a more general framework that can be applied to various types of models and data, not just classification tasks with independent features.
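      In symbols, a sketch of that special case: Naive Bayes keeps the usual Bayesian structure of a class prior times a likelihood, but factorizes the likelihood under the conditional-independence assumption:

      \[
      p(y \mid x_1,\dots,x_n) \;\propto\; p(y)\, p(x_1,\dots,x_n \mid y)
      \;=\; p(y) \prod_{j=1}^{n} p(x_j \mid y)
      \quad \text{(the product form is the Naive Bayes assumption).}
      \]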

  • @yong_sung · 9 months ago

    57:25

  • @alpaslankurt9394 · 4 months ago

    Is there anyone who has the lecture notes, or can I get the lecture notes to study this course more deeply?

    • @littleKingSolomon · 3 months ago

      taking your personal notes while watching this will greatly aid your understanding imo.

    • @OEDzn · 2 months ago

      cs229.stanford.edu/lectures-spring2022/main_notes.pdf

  • @danieljaszczyszczykoeczews2616 · 1 year ago +3

    19:17 sub ZERO

  • @AyushAgarwal-YearBTechElectron · 2 years ago +14

    I have 2 questions about the classroom, umm: why does he never use the last whiteboard (the bottom-most one), and how does the camera keep moving like that while recording?

    • @ragibshahriar187 · 2 years ago +129

      glad you understood everything else

    • @bibinal1216 · 2 years ago +1

      @@ragibshahriar187 🤣🤣🤣🤣🤣🤣

    • @ezepheros5028 · 1 year ago +4

      Idk about the first question, but for the second question they probably have a camera person.

    • @shivanshmishra752 · 1 year ago +6

      He never uses the last one because it is fixed; once you use the two moving boards and then go to use the last one, one of them will be hidden, so you will not be able to see everything that was written. lol, I am answering funny questions

    • @shivanshmishra752 · 1 year ago

      @@ragibshahriar187 lol

  • @rahuramrt8051 · 8 months ago

    hello

  • @soumyadeepsarkar2119 · 10 months ago

    25:14

  • @creativeuser9086 · 1 year ago +1

    Watching it at 2x speed is a nice hack.

  • @rahuramrt8051 · 8 months ago

    hi

  • @Nett6799 · 1 year ago

    Why didn't he start from scratch?
    Like we already know these equations ☹️🙄

    • @anishkhatiwada2502 · 1 year ago +1

      We should have already developed some prerequisites. He is teaching at Stanford, and the students already have a strong understanding of mathematics.

    • @CarbonSilicium · 7 months ago +2

      It's a graduate course in a field built entirely on math. I would say he explains many more basics than he has to. If you don't know some stuff from first-year math, you should probably start there.

  • @guywithcoolid · 3 months ago

    48:45