Man, it's such a privilege being able to watch stuff like this.
So true
I feel the exact same way. I am constantly humbled and thrilled this is available.
This man is one of the best professors I have ever seen. Thanks a lot for this lecture series.
I have been studying ML for the last 12 years, and I can endorse, as a learner, that this is one of the best classical ML introductory courses.
I searched extensively for good content on Machine learning, and by God's grace I found one! Thank you Prof Weinberger.
Your ability to uncover insights behind all those mathematical formulas is superb. I really like the way you teach. Thank you for uploading this
I’m using what I’ve learned here to try improving people’s lives. I’m a data scientist in healthcare and a former radiology researcher.
Thank you for sharing this freely.
Thank you, Prof. Weinberger, for bringing educational fairness to people from third world countries like me, who cannot afford to study at a world-class university like Cornell. I wish you health and happiness for your entire life.
I was familiar with these concepts before watching this lecture, but now I feel like I actually understand what bias and variance mean. Thank you so much for explaining these so well!
This is amazing! Bless you, Professor Kilian, if you read this.
Me: Machine learning is a black box, the math is too abstract, and nothing really makes sense
Professor Weinberger: Hold my beer
Due to the Covid crisis the professors at my university went on strike for most of the semester, so my ML class got ruined. Fortunately I found your lectures, and I've been following them over the last months. I have to say this is the most thorough introductory course to ML that I've found out there. Thank you very much, Prof. Kilian, for making your lectures available to everyone. You're working towards a freer and better world by doing so.
This is absolutely gold. I was so confused reading An Introduction to Statistical Learning, because they give no explanation of how they derive the bias-variance tradeoff, and then I found this!
Such videos don't generally come up in YT suggestions. But if you have found it, it is a gold mine!
I can't believe I have reached this point. He shaped the way I think about ML. Best professor.
This was super helpful for my own classwork. Thank you so much for posting your lectures publicly!
That's the clearest exposition of the bias-variance decomposition I've ever seen (and I've seen quite a few), by far.
Oh man, hats off to your efforts. It's an amazing lecture.
Best video on Bias-Variance Decomposition ❤
I don’t usually comment anywhere, but I can’t help but say thanks to you. Such great teaching skill!
You are one of my favourite teachers, sir. Love from India ❤️
this video is gold
It's such an amazing lecture! I've never thought of each trained ML model itself as a random variable before, and this is really eye-opening.
I like that the lecturer always gives the students several minutes to clarify things for themselves, even when a step in a proof may seem trivial. It may seem difficult, but it is concise and easy to follow. Thanks.
Thank you, professor. I feel like I’ve grown up a little bit after watching your video ;)
An amazing lecture!! It makes things very clear.
It's like a good action movie: you can't wait to see what comes next.
These videos popped up on my feed. I didn't realize you wrote the MLKR paper as well. Seeing your videos makes me wish I had taken a formal class with you. Thank you for this content, Kilian!
The way the error is decomposed reminds me of the decomposition of the sum of squares in ANOVA into within-group SS and between-group SS; it is a similar calculation.
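For readers who want the parallel spelled out, here is the standard ANOVA identity the comment refers to, with group means $\bar{y}_i$ and grand mean $\bar{y}$ (a sketch of the textbook identity, not from the lecture itself). Like the lecture's decomposition, it works because the cross term vanishes after adding and subtracting a conditional mean:

```latex
\underbrace{\sum_{i}\sum_{j}\bigl(y_{ij}-\bar{y}\bigr)^2}_{SS_{\text{total}}}
=\underbrace{\sum_{i}\sum_{j}\bigl(y_{ij}-\bar{y}_i\bigr)^2}_{SS_{\text{within}}}
+\underbrace{\sum_{i} n_i\bigl(\bar{y}_i-\bar{y}\bigr)^2}_{SS_{\text{between}}}
```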
Thanks, Professor Weinberger. I have one question on 23:28: why do we use the joint distribution p(x,y) here and not a conditional p(y|x) or p(y)*p(x)?
Because you are drawing x and y randomly, and your data set and algorithm depends on both. You could factor this into first drawing x, then y i.e. P(y|x)P(x), but it really wouldn't change much in the analysis. Hope this helps.
@@kilianweinberger698 thank you so much
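To make the factorization in the reply above concrete, the expected test error can be written either over the joint distribution or by first drawing x and then y given x (a sketch in the lecture's notation, where $h_D$ is the hypothesis trained on dataset $D$):

```latex
\mathbb{E}_{(x,y)\sim P}\!\left[(h_D(x)-y)^2\right]
=\iint (h_D(x)-y)^2\,P(x,y)\,\mathrm{d}y\,\mathrm{d}x
=\int_x\!\left(\int_y (h_D(x)-y)^2\,P(y\mid x)\,\mathrm{d}y\right)\!P(x)\,\mathrm{d}x
```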
BEST LECTURE ON THE BIAS-VARIANCE TRADEOFF!!!
"My son is doing that now, he's in second grade."
If you're the one teaching him, I believe you. Thanks.
That means the noise also depends on the feature set, so the noise is not necessarily irreducible if you can find new features to include. In the housing price example, you would appear to have a lot of noise if you left a location variable out of the feature vector x! Interesting. So we have reduced the generalization error to three dependencies: the dependency on D (the variance: will more data improve the situation?), the dependency on the feature set (does there exist a feature set that limits the variance of y itself, averaged given x?), and the dependency on the bias (are we, in principle, flexible enough to match the true data pattern, linear vs. non-linear?).
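A quick simulation can illustrate the point about "noise" from missing features. This is a hypothetical sketch (the feature names, coefficients, and noise levels are made up, not from the lecture): omitting a location effect inflates the apparent noise, and adding it back recovers the true noise level.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sqft = rng.uniform(500, 3500, n)        # observed feature
location = rng.normal(0, 50_000, n)     # feature we might leave out
eps = rng.normal(0, 10_000, n)          # true irreducible noise
price = 200 * sqft + location + eps

# Linear fit using sqft only: the location effect gets absorbed into the residuals
w = np.polyfit(sqft, price, 1)
resid_partial = price - np.polyval(w, sqft)

# Linear fit using both features: residuals shrink back to the true noise level
X = np.column_stack([sqft, location, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
resid_full = price - X @ beta

print(f"residual std with sqft only: {resid_partial.std():10,.0f}")  # ~51,000
print(f"residual std with both:      {resid_full.std():10,.0f}")     # ~10,000
```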
This was beautiful.
What a wonderful lecture.
watching backwards and happy to see printer working :)
Awesome video playlist, love it.
A really great explanation
Are {x1, x2, ..., xi} sample vectors from the X variables? Or are they functions of one variable?
What is the need for the probability term in the expected test error expression?
It was a little bit slow, but I got it now. Thanks a lot!
Thank you so much Sir !!!
Wonderful!!...Bravo
First of all, thank you for this very intuitive explanation, Mr. Weinberger!
I have some small questions and remarks which aren't 100% clear to me:
- You said that y (given x) is random, so we want to pick one statistic depending on our goals. In this case you choose the expectation E[y|x]. (One could, for example, choose the median, couldn't we?) However, some minutes later you choose the squared loss function as a "nice" choice for regression. Aren't these two sides of the same coin? If I am choosing the squared loss function, then I am picking E[y|x]? (And when I am choosing the absolute value loss function, then I am choosing the median.) So this is my first question: are my thoughts right?
- What would the proof look like if I am not in the "squared loss / expectation" setting? What would the proof look like for a generic loss function or statistic of y|x? This is my second question.
- What would the proof look like in the regression setting? I think that is pretty much the same question as question 2. Am I right in saying that if the distribution of y|x is discrete, then I am in a classification setting, and if it is continuous, then I am in a regression setting? Furthermore, if I am picking the statistic of y|x (or a loss function) in a generic way, then I have a proof for both classification and regression problems?
I would be very thankful if anyone could answer or comment on my questions!
Yours, Daniel
Yes, you are right. The math becomes a lot trickier if you don't use the squared loss, but ultimately the principle is the same for pretty much any loss function.
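As a numerical sanity check of the claim in this exchange (a minimal sketch with an arbitrary skewed distribution, not code from the course): the constant that minimizes the squared loss is the mean of y, and the constant that minimizes the absolute loss is the median.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Samples of y given one fixed x (a skewed distribution chosen for illustration)
rng = np.random.default_rng(42)
y = rng.exponential(scale=2.0, size=100_000)

# Best constant prediction c under each loss, found numerically
c_sq = minimize_scalar(lambda c: np.mean((y - c) ** 2)).x    # squared loss
c_abs = minimize_scalar(lambda c: np.mean(np.abs(y - c))).x  # absolute loss

print(f"argmin of squared loss:  {c_sq:.3f}   (mean of y:   {y.mean():.3f})")
print(f"argmin of absolute loss: {c_abs:.3f}   (median of y: {np.median(y):.3f})")
```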
I have a good command of Bayes' theorem and the total probability theorem, but I couldn't understand the symbols the prof used. I could tell that the prof used some concepts from expectation theory, but I didn't understand them well. Can someone suggest some material for this part that I can cover in a very short period, so that I can understand this lecture well?
Hi Prof
High variance implies overfitting, but overfitting has 2 parts: high test error and low training error. How do we infer low training error from high variance? High variance in h_D(x) could also be the result of gibberish learning by our algorithm, which could lead to high test and training error. IMO, low bias and high variance should mean overfitting, as in that case the model's predictions for different datasets will spread around the centre of your dartboard.
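A small experiment illustrates the usual case this question is probing (a hypothetical sketch: the true function, noise level, and polynomial degree are chosen arbitrarily, not taken from the lecture): a high-capacity learner fitted to many independent training sets tends to show small training error, much larger test error, and a large spread of predictions across datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)    # assumed true regression function

x_test = np.linspace(0.05, 0.95, 100)  # held-out inputs
train_err, test_err, preds_at_half = [], [], []

for _ in range(200):                   # 200 independent training sets D
    x = rng.uniform(0, 1, 12)
    y = f(x) + rng.normal(0, 0.3, 12)
    w = np.polyfit(x, y, deg=9)        # high-capacity model -> high variance
    train_err.append(np.mean((np.polyval(w, x) - y) ** 2))
    test_err.append(np.mean((np.polyval(w, x_test) - f(x_test)) ** 2))
    preds_at_half.append(np.polyval(w, 0.5))

print(f"avg training MSE: {np.mean(train_err):.4f}")     # small: the fit chases the noise
print(f"avg test MSE:     {np.mean(test_err):.4g}")      # much larger
print(f"Var_D[h_D(0.5)]:  {np.var(preds_at_half):.4f}")  # the spread of the 'darts'
```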
Hey Prof, I have a question. In this derivation we kind of bounded the risk for a new sample, i.e. the out-of-sample risk, which is composed of 3 parts. Is there some theory which does the same breakdown of risk on our training set, i.e. samples the model has already seen? I am particularly interested to know if my training loss can ever go to zero.
That depends on your hypothesis class (i.e. what algorithm you are using). Maybe take a look at the lectures on Boosting. AdaBoost is an ensemble algorithm that (given some assumptions) guarantees that the training error will go to zero (if you average several classifiers together).
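Here is a minimal sketch of that AdaBoost behaviour using scikit-learn (assuming a version >= 1.2, where the weak-learner argument is named estimator; the dataset is synthetic): the training error falls toward zero as the number of boosting rounds grows, as long as the stumps keep beating chance.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification problem (made up for illustration)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Boosted decision stumps: AdaBoost's guarantee says the training error
# drops (exponentially fast) toward zero under the weak-learning assumption.
for n_rounds in (1, 10, 100, 500):
    clf = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),  # weak learner
        n_estimators=n_rounds,
        random_state=0,
    ).fit(X, y)
    print(f"rounds={n_rounds:4d}  training error={1 - clf.score(X, y):.3f}")
```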
I didn't understand why D and (x, y) are independent. Can anyone explain why, please? TIA.
Damn, never mind I got it.
Excellent! Can we apply the BV trade-off across different models, e.g., a comparison between linear regression and polynomial regression? Does bold H consist of a set of hypotheses that contains only linear regressors?
Ultimately the BV trade-off exists for all models. However, as far as I know the derivation of this decomposition only falls into place so nicely in a few steps for linear regression.
17:48 what a boss question wow
Hi Professor,
Is there a way to get access to the assignments?
Hi Professor! Thank you for uploading this video. When we start the derivation by representing the expected test error in terms of h_D(x) and y, how can we explain the presence of noise? Our assumption is that y is the correct label. So while there is certainly noise in real-world examples, given the starting point of the derivation here, should noise be expected to show up?
Keep in mind noise can either be a bad measurement, but it could also be part of the label that you just cannot explain by your representation of x. Imagine I am predicting house prices (y) based on features about a house (x). My features are e.g. number of bedrooms, square footage, age, ... But now the price of a house decreases because a really loud and rambunctious fraternity moves in next door - something that is not captured in my x at all. For this house the price y is now abnormally low. The price is correct, but given your limited features the only way you can explain it is as noise.
@@kilianweinberger698 thank you
Where is lecture 18? (I don't see it in the playlist)
lecture 18 was an exam, so it was not recorded.
I eventually concluded it was the exam I skipped :D
Why is there no D at 37:00 in b^2?
Both terms, y-bar and y, are independent of the training data set D.
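In symbols (a sketch in the lecture's notation): once a term contains only $\bar h(x)=\mathbb{E}_D[h_D(x)]$, $\bar y(x)=\mathbb{E}[y\mid x]$, or $y$, nothing in it varies with the training set, so the expectation over $D$ simply drops, e.g.

```latex
\mathbb{E}_{D}\!\left[\bigl(\bar h(x)-\bar y(x)\bigr)^{2}\right]
=\bigl(\bar h(x)-\bar y(x)\bigr)^{2},
\qquad
\mathbb{E}_{D}\!\left[\bigl(\bar y(x)-y\bigr)^{2}\right]
=\bigl(\bar y(x)-y\bigr)^{2}.
```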
How does overfitting affect the decomposed error terms? Maybe it is not relevant here.
Just realized a graph in the lecture notes explains this!
Dear Prof, thank you again for posting this, very useful and interesting!! One question: in a regression setup, why do you call h (the hypothesis function) the "expected classifier"? Is this the common definition when thinking about a regression problem? Thanks!
No, it is only in the setting where you consider the training set as a random variable. Under this view, the classifier also becomes a random variable (as it is a function of the training set), and you can in theory compute its expectation. Hope this helps.
@@kilianweinberger698 Thank you! One other thing that I didn't see anyone ask: what happens to the bias-variance tradeoff, which you fully showed for MSE, when the loss function is not MSE? Does the decomposition still contain exactly those 3 quantities, bias-variance-noise? How do we measure the tradeoff in that case? I assume we no longer have this convex parabola shape. (If you have a good source explaining this issue, please refer me to it.)
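Since the thread ends here, a numerical check of the squared-loss case may still be useful (a minimal sketch with an arbitrary true function, noise level, and learner, none of which come from the lecture): estimate bias^2, variance, and noise at one test point by retraining on many independent datasets, then compare their sum against the directly estimated expected test error.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(2 * np.pi * x)   # assumed true conditional mean y-bar(x)
sigma = 0.3                           # assumed label-noise std
x0 = 0.5                              # a fixed test point

# Retrain the same learner (a degree-3 polynomial fit) on many independent D
preds = []
for _ in range(5000):
    x = rng.uniform(0, 1, 20)
    y = f(x) + rng.normal(0, sigma, 20)
    preds.append(np.polyval(np.polyfit(x, y, deg=3), x0))
preds = np.array(preds)

var = preds.var()                      # variance term
bias_sq = (preds.mean() - f(x0)) ** 2  # bias^2 (preds.mean() estimates h-bar(x0))
noise = sigma ** 2                     # irreducible noise

# Directly estimated expected test error at x0
y_test = f(x0) + rng.normal(0, sigma, 5000)
direct = np.mean((preds - y_test) ** 2)

print(f"bias^2 + variance + noise  = {bias_sq + var + noise:.4f}")
print(f"direct expected test error = {direct:.4f}")  # should match closely
```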
Thank you very much
Enlightenment
Killian is my hero
Point @22:00
Then @41:00
My real life dart playing skills have high bias, high variance.
This is a form of the Pythagorean theorem
Interesting observation!
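To flesh out the analogy (a sketch in the lecture's notation, not a statement from the video): the first splitting step is exactly a Pythagorean identity in the space of square-integrable random variables, because the cross term vanishes.

```latex
\mathbb{E}\!\left[(h_D-y)^2\right]
=\mathbb{E}\!\left[(h_D-\bar h)^2\right]+\mathbb{E}\!\left[(\bar h-y)^2\right],
\qquad\text{since }\mathbb{E}_D\!\left[h_D-\bar h\right]=0\text{ kills the cross term.}
```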
*disappears into some good feeling* hahaha
24:56 Please, someone, ask a question; I am not ready for war :)
Everything is excellent except the poor handwriting.