Machine Learning Lecture 19 "Bias Variance Decomposition" -Cornell CS4780 SP17

  • Published: 6 Jan 2025

Comments •

  • @thecactus7950
    @thecactus7950 6 years ago +94

    Man, it's such a privilege being able to watch stuff like this.

    • @Biesterable
      @Biesterable 6 years ago +8

      So true

    • @TrentTube
      @TrentTube 5 years ago +3

      I feel the exact same way. I am constantly humbled and thrilled this is available.

  • @filippovannella4957
    @filippovannella4957 5 years ago +44

    This man is one of the best professors I have ever seen. Thanks a lot for this lecture series.

  • @TheCuriousCurator-Hindi
    @TheCuriousCurator-Hindi 1 month ago

    I have been studying ML for the last 12 years, and as a learner I can endorse this as one of the best classical ML introductory courses.

  • @tarunluthrabk
    @tarunluthrabk 4 years ago +12

    I searched extensively for good content on Machine learning, and by God's grace I found one! Thank you Prof Weinberger.

  • @ebiiseo
    @ebiiseo 5 years ago +8

    Your ability to uncover insights behind all those mathematical formulas is superb. I really like the way you teach. Thank you for uploading this.

  • @juliocardenas4485
    @juliocardenas4485 4 years ago +5

    I’m using what I’ve learned here to try improving people’s lives. I’m a data scientist in healthcare and a former radiology researcher.
    Thank you for sharing this freely.

  • @xwcao1991
    @xwcao1991 3 years ago +3

    Thank you, Prof. Weinberger, for bringing educational fairness to people from third-world countries like me, who cannot afford to study at a world-class university like Cornell. Wishing you health and happiness your entire life.

  • @jachawkvr
    @jachawkvr 4 years ago +7

    I was familiar with these concepts before watching this lecture, but now I feel like I actually understand what bias and variance mean. Thank you so much for explaining these so well!

  • @MohamedTarek-vt4lb
    @MohamedTarek-vt4lb 1 year ago +1

    This is amazing! Bless you, Professor Kilian, if you read this.

  • @vatsan16
    @vatsan16 4 years ago +35

    Me: Machine learning is a black box, the math is too abstract, and nothing really makes sense
    Professor Weinberger: Hold my beer

  • @jorgeestebanmendozaortiz873
    @jorgeestebanmendozaortiz873 3 years ago +3

    Due to the Covid crisis the professors at my university went on strike for most of the semester, so my ML class got ruined. Fortunately I found your lectures, and I've been following along over the last months. I have to say this is the most thorough introductory course on ML that I've found out there. Thank you very much, Prof. Kilian, for making your lectures available to everyone. You're working towards a freer and better world by doing so.

  • @kevinshen3221
    @kevinshen3221 3 years ago +1

    This is absolutely gold. I was so confused reading An Introduction to Statistical Learning because they give no explanation of how they get the bias-variance tradeoff, and then I found this!

  • @jenishah9825
    @jenishah9825 2 years ago

    Such videos don't generally come up in YT suggestions. But if you have found it, it is a gold mine!

  • @yuniyunhaf5767
    @yuniyunhaf5767 5 years ago +5

    I can't believe I have reached this point. He shaped the way I think about ML. Best professor.

  • @psfonseka
    @psfonseka 5 years ago +4

    This was super helpful for my own classwork. Thank you so much for posting your lectures publicly!

  • @deltasun
    @deltasun 4 years ago +3

    That's the clearest exposition of the bias-variance decomposition I've ever seen (and I've seen quite a few). By far.

  • @rajeshs2840
    @rajeshs2840 5 years ago +5

    Oh man, hats off to your efforts. It's an amazing lecture.

  • @vishnuvardhan6625
    @vishnuvardhan6625 10 months ago +1

    Best video on Bias-Variance Decomposition ❤

  • @sheikhshafayat6984
    @sheikhshafayat6984 3 years ago

    I don't usually comment anywhere, but I can't help saying thanks to you. Such great teaching skill!

  • @TheAIJokes
    @TheAIJokes 3 years ago

    You are one of my favourite teachers, sir... love you from India... ❤️

  • @muratcan__22
    @muratcan__22 5 years ago +13

    this video is gold

  • @crystinaxinyuzhang3621
    @crystinaxinyuzhang3621 4 years ago +4

    It's such an amazing lecture! I've never thought of each trained ML model itself as a random variable before, and this is really eye-opening.

  • @NO_REPLY_ALARM_TOWARD_ME
    @NO_REPLY_ALARM_TOWARD_ME 2 years ago

    I like that the lecturer always gives the students several minutes to clarify things for themselves, even when a proof step may seem trivial. It may seem slow, but it makes the lecture concise and easy to follow. Thanks.

  • @haodongzheng7045
    @haodongzheng7045 3 years ago +1

    Thank you, professor. I feel like I've grown a little bit after watching your video ;)

  • @sans8119
    @sans8119 4 years ago +3

    An amazing lecture!! It makes things very clear.

  • @mateuszjaworski2974
    @mateuszjaworski2974 1 year ago

    It's like a good action movie; you can't wait to see what comes next.

  • @angelocortez5185
    @angelocortez5185 3 years ago

    These videos popped up on my feed. I didn't realize you wrote the MLKR paper as well. Seeing your videos makes me wish I had taken a formal class with you. Thank you for this content, Kilian!

  • @taketaxisky
    @taketaxisky 4 years ago +2

    The way the error is decomposed reminds me of the decomposition of the sum of squares in ANOVA into within-group SS and between-group SS; it is a similar calculation.
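
    For reference, here is the decomposition being compared to ANOVA, written out (my transcription of the standard result covered in the lecture, not copied verbatim from the board):

        \mathbb{E}_{x,y,D}\big[(h_D(x)-y)^2\big]
          = \underbrace{\mathbb{E}_{x,D}\big[(h_D(x)-\bar{h}(x))^2\big]}_{\text{variance}}
          + \underbrace{\mathbb{E}_{x}\big[(\bar{h}(x)-\bar{y}(x))^2\big]}_{\text{bias}^2}
          + \underbrace{\mathbb{E}_{x,y}\big[(\bar{y}(x)-y)^2\big]}_{\text{noise}},
        \qquad \bar{h}(x)=\mathbb{E}_{D}[h_D(x)],\ \ \bar{y}(x)=\mathbb{E}_{y\mid x}[y].

    The analogy holds: just as in ANOVA, the total sum of squares splits cleanly because the cross terms vanish in expectation.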

  • @hanseyye1468
    @hanseyye1468 3 years ago +2

    Thanks, Professor Weinberger. I have one question about 23:28: why do we use the joint distribution p(x,y) here and not a conditional p(y|x), or p(y)*p(x)?

    • @kilianweinberger698
      @kilianweinberger698  3 years ago

      Because you are drawing x and y randomly, and your data set and algorithm depend on both. You could factor this into first drawing x, then y, i.e. P(y|x)P(x), but it really wouldn't change much in the analysis. Hope this helps.

    • @hanseyye1468
      @hanseyye1468 3 years ago

      @@kilianweinberger698 thank you so much
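
      A short note on the factorization discussed in this thread (my own rendering, under the usual i.i.d. sampling assumption):

          \mathbb{E}_{(x,y)\sim P}\big[\ell(x,y)\big]
            = \int\!\!\int \ell(x,y)\,P(x,y)\,\mathrm{d}y\,\mathrm{d}x
            = \int P(x)\int \ell(x,y)\,P(y\mid x)\,\mathrm{d}y\,\mathrm{d}x
            = \mathbb{E}_{x\sim P(x)}\,\mathbb{E}_{y\sim P(y\mid x)}\big[\ell(x,y)\big].

      So P(x,y) and P(y|x)P(x) give identical expectations; only P(y)P(x) would differ, because that would assume x and y are independent.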

  • @vishchugh
    @vishchugh 4 years ago +1

    BEST LECTURE ON BIAS VARIANCE TR !!!!!!!!!!!!!!!!

  • @janismednieks1277
    @janismednieks1277 3 years ago

    "My son is doing that now, he's in second grade."
    If you're the one teaching him, I believe you. Thanks.

  • @StevenSarasin
    @StevenSarasin 1 year ago

    That means the noise also depends on the feature set, so the noise is not necessarily irreducible if you can find new features to include. In the housing price example you would appear to have a lot of noise if you left the location variable out of the features x! Interesting. So we have decomposed the generalization error into the dependency on D (the variance: will more data improve the situation?), the dependency on the feature set (does there exist a feature set that limits the variance of y itself given x?), and the bias (are we, in principle, flexible enough to match the true data pattern, linear vs. non-linear?).
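
    To state the point above formally (my own phrasing, using the lecture's definitions): the noise term is defined relative to the chosen feature representation,

        \text{noise} = \mathbb{E}_{x,y}\big[(\bar{y}(x)-y)^2\big], \qquad \bar{y}(x)=\mathbb{E}_{y\mid x}[y],

    so it is "irreducible" only for a fixed x; enlarging the feature set (e.g. adding the location variable) changes \bar{y}(x) and can shrink this term. More data, by contrast, only helps the variance term.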

  • @xmtiaz
    @xmtiaz 3 years ago +1

    This was beautiful.

  • @ashraf736
    @ashraf736 2 years ago

    What a wonderful lecture.

  • @TheCuriousCurator-Hindi
    @TheCuriousCurator-Hindi 29 days ago

    Watching backwards and happy to see the printer working :)

  • @noblessetech
    @noblessetech 5 years ago +1

    Awesome video playlist, love it.

  • @abhyudayasrinet17
    @abhyudayasrinet17 5 years ago +1

    A really great explanation

  • @VijayBhaskarSingh
    @VijayBhaskarSingh 2 years ago

    Are {x1, x2, ..., xi} sample vectors (from the X variables)? Or are they functions of one variable?

  • @gauravsinghtanwar4415
    @gauravsinghtanwar4415 4 years ago

    What is the need for the probability term in the expected test error expression?

  • @florianwicher
    @florianwicher 4 years ago +1

    It was a little bit slow, but I got it now. Thanks a lot!

  • @vocabularybytesbypriyankgo1558
    @vocabularybytesbypriyankgo1558 3 months ago

    Thank you so much Sir !!!

  • @jordankuzmanovik5297
    @jordankuzmanovik5297 4 years ago +1

    Wonderful!!...Bravo

  • @danielsiemmeister5286
    @danielsiemmeister5286 3 years ago

    First of all, thank you for this very intuitive explanation, Mr. Weinberger!
    I have some small questions or remarks which aren't 100 % clear to me:
    - You said that y (given x) is random, so we want to pick one statistic depending on our goals. In this case you choose the expectation E[y|x]. (One could, for example, choose the median, couldn't we?) However, some minutes later you choose the squared loss function as a "nice" choice for regression. Aren't these two sides of the same coin? If I am choosing the squared loss function, then I am picking E[y|x]? (When I am choosing the absolute-value loss function, then I am choosing the median.) So this is my first question: are my thoughts right?
    - How would the proof look if I am not in the "squared loss / expectation" setting? What would the proof look like for a generic loss function or statistic of y|x? This is my second question.
    - How would the proof look in the classification setting? I think that is pretty much the same question as question 2. Am I right in saying that if the distribution of y|x is discrete, then I am in a classification setting, and if it is continuous, then I am in a regression setting? Furthermore, if I pick the statistic of y|x (or a loss function) in a generic way, then I have a proof for classification and regression problems?
    I would be very thankful if anyone could answer or comment on my questions!
    Yours, Daniel

    • @kilianweinberger698
      @kilianweinberger698  3 years ago +2

      Yes, you are right. The math becomes a lot trickier if you don't use the squared loss, but ultimately the principle is the same for pretty much any loss function.
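
      For what it is worth, the "two sides of the same coin" observation in the question can be written as a standard fact (not something shown explicitly in the lecture): the loss function determines which statistic of y|x the optimal predictor is,

          \arg\min_{c}\ \mathbb{E}\big[(y-c)^2 \mid x\big] = \mathbb{E}[y\mid x],
          \qquad
          \arg\min_{c}\ \mathbb{E}\big[\,|y-c|\,\mid x\big] = \operatorname{median}(y\mid x).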

  • @amit_muses
    @amit_muses 4 years ago

    I have a good command of Bayes' theorem and the total probability theorem, but couldn't understand the symbols the prof used. I could tell that the prof used some concepts from expectation theory, but couldn't follow them well. Can someone suggest some material for this part that I can work through in a short period, so that I can understand this lecture well?

  • @ayushmalik7093
    @ayushmalik7093 2 years ago

    Hi Prof,
    High variance implies overfitting, but overfitting has two parts: high test error and low training error. How do we infer low training error from high variance? High variance in h_D(x) could also be the result of our algorithm learning gibberish, which could lead to high test and training error. IMO, low bias and high variance should mean overfitting, since in that case the model predictions for different datasets will spread around the centre of your dartboard.
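
    For reference, the variance term under discussion only measures how much the learned predictor swings with the training set D (my transcription of the lecture's definition):

        \text{variance} = \mathbb{E}_{x,D}\big[(h_D(x)-\bar{h}(x))^2\big], \qquad \bar{h}(x)=\mathbb{E}_{D}[h_D(x)],

    so on its own it says nothing about the training error; the usual "overfitting" picture pairs it with low bias, exactly as the comment suggests.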

  • @siddhanttandon6246
    @siddhanttandon6246 3 years ago

    Hey Prof, I have a question. In this derivation we essentially decomposed the risk for a new sample, i.e. the out-of-sample risk, into 3 parts. Is there some theory which does the same breakdown of the risk on our training set, i.e. samples the model has already seen? I am particularly interested in knowing whether my training loss can ever go to zero.

    • @kilianweinberger698
      @kilianweinberger698  3 years ago +1

      That depends on your hypothesis class (i.e. what algorithm you are using). Maybe take a look at the lectures on Boosting. AdaBoost is an ensemble algorithm that (given some assumptions) guarantees that the training error will go to zero (if you average several classifiers together).
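
      A tiny sketch of the point in this reply, using 1-nearest-neighbor as an example of a hypothesis class whose training error is zero by construction (my own toy example, not course code; assumes numpy and scikit-learn are installed):

          # 1-NN memorizes the training set, so its training error is zero
          # whenever no two identical inputs carry different labels.
          import numpy as np
          from sklearn.neighbors import KNeighborsClassifier

          rng = np.random.default_rng(0)
          X = rng.normal(size=(200, 5))                                # continuous features
          y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)   # noisy labels

          clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
          print("training error:", np.mean(clf.predict(X) != y))       # -> 0.0

      The test error of such a model is, of course, a different story; that gap is exactly what the variance term captures.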

  • @adiratna96
    @adiratna96 3 years ago +1

    I didn't understand why D and (x,y) are independent. Can anyone explain why, please? TIA.

    • @adiratna96
      @adiratna96 3 years ago +1

      Damn, never mind I got it.

  • @pendekantimaheshbabu9799
    @pendekantimaheshbabu9799 4 years ago

    Excellent. Can we apply the BV trade-off across different models, i.e. for example a comparison between linear regression and polynomial regression? Does the bold H consist of a set of hypotheses that contains only linear regressors?

    • @kilianweinberger698
      @kilianweinberger698  4 years ago +1

      Ultimately the BV trade-off exists for all models. However, as far as I know the derivation of this decomposition only falls into place so nicely in a few steps for linear regression.

  • @immabreakaleg
    @immabreakaleg 4 years ago +2

    17:48 what a boss question wow

  • @ammarkhan2611
    @ammarkhan2611 4 years ago

    Hi Professor,
    Is there a way to get access to the assignments?

  • @macc7374
    @macc7374 3 years ago

    Hi Professor! Thank you for uploading this video. When we start the derivation by representing the expected test error in terms of hD(x) and y, how can we explain the presence of noise? Our assumption is that y is the correct label. So while there is certainly noise in real-world examples, given the starting point of the derivation here, should noise be expected to show up?

    • @kilianweinberger698
      @kilianweinberger698  3 years ago +4

      Keep in mind that noise can be a bad measurement, but it can also be a part of the label that you just cannot explain with your representation of x. Imagine I am predicting house prices (y) based on features about a house (x). My features are e.g. number of bedrooms, square footage, age, ... But now the price of a house decreases because a really loud and rambunctious fraternity moves in next door - something that is not captured in my x at all. For this house the price y is now abnormally low. The price is correct, but given your limited features the only way you can explain it is as noise.

    • @macc7374
      @macc7374 3 years ago

      @@kilianweinberger698 thank you
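
      A minimal sketch of the "fraternity next door" effect described in this thread (my own toy example; the feature names and numbers are made up, and it assumes numpy and scikit-learn):

          # An unobserved feature ("loud_neighbors") shifts the price. A model that
          # never sees it can only absorb that shift as apparent noise.
          import numpy as np
          from sklearn.linear_model import LinearRegression

          rng = np.random.default_rng(0)
          n = 1000
          bedrooms = rng.integers(1, 6, size=n)
          sqft = rng.normal(1500, 400, size=n)
          loud_neighbors = rng.integers(0, 2, size=n)           # not part of x
          price = 50_000 * bedrooms + 100 * sqft - 80_000 * loud_neighbors

          X_full = np.c_[bedrooms, sqft, loud_neighbors]
          X_partial = np.c_[bedrooms, sqft]
          full = LinearRegression().fit(X_full, price)
          partial = LinearRegression().fit(X_partial, price)

          print(np.var(price - full.predict(X_full)))        # ~0: no apparent noise
          print(np.var(price - partial.predict(X_partial)))  # large: omitted feature looks like noise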

  • @sandeshhegde9143
    @sandeshhegde9143 5 years ago +2

    Where is lecture 18? (I don't see it in the playlist)

    • @Saganist420
      @Saganist420 5 years ago +5

      Lecture 18 was an exam, so it was not recorded.

    • @TrentTube
      @TrentTube 5 years ago

      I eventually concluded it was the exam I skipped :D

  • @bharatbajoria
    @bharatbajoria 4 years ago

    Why is there no D at 37:00 in b^2?

    • @kilianweinberger698
      @kilianweinberger698  4 years ago +2

      Both terms, y-bar and y, are independent of the training data set D.
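
      Spelling that out (my reading of the reply): since neither \bar{y}(x)=\mathbb{E}_{y\mid x}[y] nor y is a function of the training set D, the expectation over D simply drops,

          \mathbb{E}_{x,y,D}\big[(\bar{y}(x)-y)^2\big] = \mathbb{E}_{x,y}\big[(\bar{y}(x)-y)^2\big].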

  • @taketaxisky
    @taketaxisky 4 years ago

    How does overfitting affect the decomposed error terms? Maybe it is not relevant here.

    • @taketaxisky
      @taketaxisky 4 years ago +1

      Just realized a graph in the lecture notes explains this!

  • @roniswar
    @roniswar 3 years ago

    Dear Prof, thank you again for posting this, very useful and interesting!! One question: in a regression setup, why do you call h (the hypothesis function) the "expected classifier"? Is this the common definition when thinking about a regression problem? Thanks!

    • @kilianweinberger698
      @kilianweinberger698  3 years ago +2

      No, it is only in the setting where you consider the training set as a random variable. Under this view, the classifier also becomes a random variable (as it is a function of the training set), and you can in theory compute its expectation. Hope this helps.

    • @roniswar
      @roniswar 3 years ago

      @@kilianweinberger698 Thank you! One other thing that I didn't see anyone ask: what happens to the bias-variance tradeoff, which you fully showed for the MSE, when the loss function is not the MSE? Does the decomposition still contain exactly those 3 quantities, bias-variance-noise? How do we measure the tradeoff in that case? We no longer have this convex parabola shape, I assume. (If you have a good source explaining this issue, please refer me to it.)
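
      Writing out the "expected classifier" from the earlier reply (my notation; \mathcal{A} for the learning algorithm is my own shorthand):

          h_D = \mathcal{A}(D), \qquad \bar{h}(x) = \mathbb{E}_{D\sim P^n}\big[h_D(x)\big],

      i.e. the prediction at x averaged over all training sets of size n drawn i.i.d. from P. The word "classifier" is just the course's habitual term; in this lecture h_D is a regressor.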

  • @meenakshisundaram8310
    @meenakshisundaram8310 3 years ago

    Thank you very much

  • @utkarshtrehan9128
    @utkarshtrehan9128 4 years ago

    Enlightenment

  • @lorenzoappino9158
    @lorenzoappino9158 3 years ago

    Kilian is my hero

  • @logicboard7746
    @logicboard7746 2 years ago

    Point @22:00

  • @Saganist420
    @Saganist420 5 years ago +5

    My real life dart playing skills have high bias, high variance.

  • @gaconc1
    @gaconc1 3 years ago +1

    This is a form of the Pythagorean theorem
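
    One way to make that analogy precise (my sketch, following the lecture's decomposition): the cross term in the expansion vanishes, which plays the role of orthogonality in the Pythagorean theorem,

        \mathbb{E}_{x,D}\big[(h_D(x)-\bar{h}(x))(\bar{h}(x)-\bar{y}(x))\big]
          = \mathbb{E}_{x}\Big[\big(\bar{h}(x)-\bar{y}(x)\big)\,\underbrace{\mathbb{E}_{D}\big[h_D(x)-\bar{h}(x)\big]}_{=0}\Big] = 0.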

  • @kc1299
    @kc1299 4 years ago

    "disappears into some good feeling" hahaha

  • @deepfakevasmoy3477
    @deepfakevasmoy3477 4 years ago +1

    24:56 please someone ask some question, I am not ready for war :)

  • @hohinng8644
    @hohinng8644 2 years ago

    Everything is excellent except the poor handwriting.