Good tutorial. My thoughts below (hope it adds to someone's understanding):
We perform cross-validation to make sure the model has a good accuracy rate and can be used for prediction on unseen/new (test) data. To do so, we split our dataset properly into train and test sets, for example 80% for training and 20% for testing the model. This can be done using train_test_split or K-fold cross-validation (K-fold is mostly used to avoid under- and overfitting problems).
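To make that concrete, here is a minimal sketch using scikit-learn (the diabetes dataset and linear model are just placeholders I chose for illustration, not anything from the video):

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

X, y = load_diabetes(return_X_y=True)

# 80/20 split: hold out 20% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LinearRegression().fit(X_train, y_train)
print("train R^2:", model.score(X_train, y_train))
print("test  R^2:", model.score(X_test, y_test))

# 5-fold cross-validation: every sample serves in both training and
# validation folds, giving a more stable estimate of generalization
cv = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LinearRegression(), X, y, cv=cv)
print("5-fold mean R^2:", scores.mean())
```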
A model is considered good when it gives high accuracy on training as well as testing data. Good accuracy on test data means the model will make accurate predictions on new or unseen data, i.e., data not included in the training set.
Good accuracy also means that the values predicted by the model will be very close to the actual values.
Bias will be low and variance will be high when the model performs well on the training data but poorly on the test data. High variance means the model cannot generalize to new or unseen data. (This is the case of overfitting.)
If the model performs poorly (i.e., is less accurate) on both training and test data, it has high bias; the variance is typically low, since a model this simple barely changes when the data changes. (This is the case of underfitting.)
If the model performs well on both training and test data, meaning predictions are close to actual values even for unseen data and accuracy is high, then bias will be low and variance will also be low.
The best model must have low bias (a low error rate on training data) and low variance (it can generalize, with a low error rate on new/test data).
(This is the case of a best-fit model.) So always aim for low bias and low variance in your models.
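To see all three cases numerically, here is a small self-contained experiment (my own illustration, not from the video), using polynomial degree as the complexity knob:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)  # true curve plus noise

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

for degree in (1, 4, 15):  # underfit, reasonable fit, overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    # degree 1 tends to leave both errors high (high bias, underfitting);
    # degree 15 tends to chase noise, so train error drops while test error
    # rises (high variance, overfitting); degree 4 keeps both low (best fit)
    print(f"degree {degree:2d}: train MSE={train_err:.3f}, test MSE={test_err:.3f}")
```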
Wonderful summary!
You should probably create articles because you are good at summarising concepts!
If you have one please do share!
Great
Very well written 👍🏻
Thanks for sharing
👍🏻 Consider writing blogs
Really very nice and well written. After watching the video, going through your summary stamps it on our brains. Thanks to both of you for your efforts.
This video needs to be watched again and again. Machine learning is nothing but a proper understanding of overfitting and underfitting. Watching for the second time. Thanks Krish.
Agreed!
This is what they asked me in the OLA interview. And the interviewer covered this topic in great depth. It's pretty fundamental to ML. Sad to report they rejected me, though.
@@batman9937 Hi man, please let us know what other questions they asked.
@@ashishbomble8547 Buy the book "Ace the Data Science Interview" by Kevin Huo and Nick Singh.
Brother, you are spot on. What I couldn't easily understand after paying 2.80 lakhs in fees, you explained in 16 minutes. Kudos, amazing work dear, all the very best.
At 6:10 you made it all clear to me in just 2 lines!! Thank you for this video :)
Hi Krish, thanks for the explanation. At 6:02 it should be high bias and low variance in the case of underfitting.
Yes, exactly, I was looking for this comment.
Amazing video by Krish. Thanks for pointing this out. @Krish Naik please make a note of this.
yess!!!
yess
Exactly! I searched for this comment :)
This was my biggest doubt and you clarified it in so easy terms. Thank you so much Krish.
XGBoost: the answer can't be simple, but here is what happens. When dealing with high bias, do better feature engineering and decrease regularization; in XGBoost, we increase the depth of each tree, among other techniques, to minimize the loss. So you can conclude that if proper parameters are defined (including regularization, etc.), it will yield low bias and low variance.
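As a rough sketch of which knobs that maps to, assuming the xgboost Python package (the specific values are illustrative only, not tuned recommendations):

```python
from xgboost import XGBRegressor

# Fighting high bias (underfitting): allow the model more capacity.
low_bias_model = XGBRegressor(
    max_depth=8,        # deeper trees capture more structure
    n_estimators=500,   # more boosting rounds keep reducing training loss
    learning_rate=0.1,
    reg_lambda=0.1,     # weaker L2 regularization
)

# Fighting high variance (overfitting): constrain and regularize.
low_variance_model = XGBRegressor(
    max_depth=3,        # shallower trees generalize better
    n_estimators=200,
    learning_rate=0.05,
    reg_lambda=5.0,     # stronger L2 regularization
    subsample=0.8,      # row subsampling adds randomness, reduces variance
)
```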
Can't express my gratitude enough ! Thank you for explaining it so well
Krish, your videos hit the nail on the head. You explained the meaning of bias and variance. Thanks a lot!
I have been trying to understand this concept for a long time... but never knew it was this simple 😀 Thank you Krish for this amazingly simple explanation.
You can't get a clearer explanation than this, hats off mate
One video, all clear content... thanks bro, it was a really nice session.. you truly belong to the low-bias and low-variance category of humans. Keep posting such clear ML videos.
Please note that underfitting occurs when we have HIGH BIAS and LOW VARIANCE.... except for that error, this video is an excellent one. Thanks.
In underfitting, the model performs poorly on test data as well, so why does it have low variance, if variance = test error?
As per my understanding, variance does not actually mean the test error, but the change in test error when the test data is modified. Because in underfitting, the model is so generalized that even if we change the test data greatly, we still get roughly the same test error. Somebody correct me if I'm wrong.
Sir, after watching this video, my confusion between bias and variance was cleared in one go. Awesome explanation.
Excellent tutorial. Better than the IIT professors who are teaching machine learning.
Krish, thank you so much. This is the best channel for data science that I have ever seen. Great efforts, Krish. Thanks again.
What an excellent explanation on bias and variance. I finally understood both terms. Thank you so much for the video and keep up the good work!
Providing this info makes you a great teacher... the way you explain, everything goes straight into the brain.
Beautifully explained. My concepts of overfitting and underfitting models are now clear. 👍 Thanks 🍻
One of the best explanations of bias and variance w.r.t. overfitting and underfitting...
Krish sir, I wholeheartedly hope God blesses you. You are doing a great job, and thanks for iNeuron, it made my life easy.
The way of explanation is wow.
The clearest and most precise information 🎉 thank you sir ❤
This is an awesome video. I was fully confused earlier, and this video made it all clear!! Thanks a lot sir!!
TBH, the best video on YouTube about bias and variance.
Krish, you are a master in statistics and machine learning
After watching this video my doubt is clear; this really helps. And thanks for giving your precious time...
Thank you very much sir for your clear explanation of bias, variance, underfitting, and overfitting, covering many parameters.
You are really great sir... your explanation is crystal clear.
Very succinct explanation of the very fundamental ML concept. Thank you for the video!
Very thorough and good explanation! Thank you.
Side note: I would like to point out that at 2:12 the degree of the polynomial is still 2 (it's still a quadratic function).
It was a really good video and it cleared all the doubts I had.
A very important discussion of important terms in ML. Thanks. An easy explanation of hard words.
Wow, awesome. Great work done in one single video. Insightful.
The best explanation among all the YouTube channels 👏. I love the way you always keep things simple. Glad to have found your channel, sir.
XGBoost has the property of low bias and high variance; however, it can be regularized and turned into low bias and low variance. Useful video indeed.
At 06:08 it is said that for underfitted data, the model has high bias and high variability. To my understanding, this is not correct.
Variance is the complexity of a model that lets it capture the internal distribution of the data points in the training set. When variance is high, the model will be fitted to most (even all) of the training data points. This results in high training accuracy and low test accuracy.
So in summary:
When the model is overfitted: low bias and high variance
When the model is underfitted: high bias and low variance
Bias: the INABILITY of the model to fit the training data
Variance: the complexity of the model which helps it fit the training data
yes bro, you are correct
I also have the same doubt. @Krish Naik sir, please have a look at it.
But an underfit model is supposed to have low accuracy on the training data, no? Confusing!!
Have I learned the wrong definition of bias and variance from Krish sir's explanation? Now I am confused 😑
@prachi... not at all, the concept is the same in the end.
I really love his in-depth intuition videos... they stand out among his plethora of videos!
Very good video, the easiest video for understanding the logic of bias & variance.
Very useful lecture. It helped me a lot to understand this topic in a simple and easy way. Please keep going.
Thanks Krish, I had scoured the net, but this understanding was great. Good memory hook! Thanks for this.
Great, I learnt by watching your entire playlist.
Brilliantly explained !! Thank you !!
Bias is the error on training data,
variance is the error on test data. Thanks for simplifying.
Thank you very much for the simple and proper explanation...
Very good. Revised my concepts perfectly 🔥🔥
You make some of the best tech videos on YouTube!!!!
My god Krish. This was the most confusing thing for me. And you cleared it so well.
An ultimate discussion, and an ultimate person who delivered it.
Today I got clarity on this topic. Thank you sir.
"Bias is in the training dataset and variance is in the testing dataset": this line cost me a LinkedIn machine learning job.
You nailed it man ! Great work ! Respect your time and effort!
Best Explanation on Bias and Variance!
Insanely good video. Also this has amazing energy!
Best... sir, please make more videos like this, I mean on the board... it's easier to understand this way.
I was really in great need of such an excellent explanation of bias and variance. Great help!
Superbly explained.. it connected the dots for me. Thank you.
Beautifully explained.
But in underfitting, the model shows high bias and low variance instead of high variance.
Yes, you are right... I made a minor mistake.
@@krishnaik06 But then sir, you said bias is the error, and in underfitting the training data error is low.. so shouldn't it be low bias?
@@namansinghal3685 when the model has high bias, it misses out on certain observations.. so the model will be underfit..
@@namansinghal3685 in the case of underfitting, the training error is high.. not low.
@@krishnaik06 You should pin this comment
Krish is best fit teacher!
very good explanation
Excellent lectures, Krish. Great.
Excellent explanation.. Krish, in the same video you gave the example of XGBoost, i.e., the model learns from the previous DT and implements the same subsequently.
This video is great, but one thing I want to correct: bias and variance work in an inversely proportional manner, i.e., if we get high variance, bias will be low, and with high bias, variance will be low. So in overfitting it's high variance/low bias, and in underfitting high bias/low variance.
In order to be the best, a model should be low bias/low variance.
Please make a video on some mathematical terminology like gradient descent, etc. You are really doing a great job.
Sir superb explanation 🙏🙏
2:30 - underfitting and overfitting
6:10 - Bias variance
6:00 Small correction in your video.
Underfitting - High Bias & Low Variance
Overfitting - Low Bias & High Variance
This guy is really great... Thank you so much for the effort you put in for us.
Thanks Mr. Krish for your great explanation; now I can clearly understand bias and variance :D
Even though I am studying AI in my college, this was probably easier to understand. Thanks man..
Well articulated, thank you Krish
GREAT SIR I GOT IT, THANKS FOR YOUR EFFORT.
You are so awesome, I love your teaching.
You made my work easy with this explanation. Thanks.
Brilliant video!!!!! Explained everything to the point.
Very nice, this filled a gap in my knowledge.
Excellent teaching
Clear explanation. @krish sir, thanks for making this video.
Well done sir... keep it up.
You explained it so well sir. I was struggling with these terms, but after watching your video my concepts of bias, variance, underfitting, and overfitting are crystal clear. Thank you!
@6:13 Underfitting will be high bias and low variance.
For underfitting the condition should be high bias and low variance, which is stated as high bias and high variance in this video.
Underfitting: High Bias and Low Variance
Overfitting: Low Bias and High Variance
Generalized Model: Low Bias and Low Variance
Bias: Error from Training Data
Variance: Error from Testing Data
@Krish Please confirm
I am confused...
Does that mean an underfitted model has high accuracy on testing data?
Underfitting : High Bias and HIGH Variance
@@videoinfluencers3415 I mean an underfitting model has low accuracy on both testing and training data, and the difference between the training accuracy and the test accuracy is very small; that's why we get low variance and high bias in underfitting models.
You are correct bro, I checked on Wikipedia and in some other sources too.
@Krish please confirm.
If it makes it any clear for other learners, here's my explanation...
BIAS is the simplifying assumptions made by a model to make the target function (the underlying function that the ML model is trying to learn) easier to learn.
VARIANCE refers to the changes to the estimate of the target function that occur if the dataset is changed when implementing the model.
Considering the linear model in the example: it makes the assumption that the input and output are related linearly, causing the target function to underfit and hence giving a HIGH BIAS ERROR.
But the same model, when used with similar test data, will give quite similar results, hence giving a LOW VARIANCE ERROR.
I hope this clears the doubt.
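If it helps, the following sketch (my own illustration under the definitions above, not from the video) estimates bias and variance empirically by refitting two models on many resampled training sets and measuring how their predictions at one fixed point shift:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def true_f(x):
    return np.sin(x)  # the underlying target function in this toy setup

rng = np.random.default_rng(1)
x0 = np.array([[1.5]])            # one fixed query point
preds = {1: [], 12: []}           # degree 1 (rigid) vs degree 12 (flexible)

for _ in range(200):              # 200 independently drawn training sets
    X = rng.uniform(-3, 3, size=(40, 1))
    y = true_f(X).ravel() + rng.normal(scale=0.3, size=40)
    for degree in preds:
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X, y)
        preds[degree].append(model.predict(x0)[0])

for degree, p in preds.items():
    bias = np.mean(p) - true_f(x0)[0, 0]   # systematic offset from the truth
    variance = np.var(p)                   # spread of the estimate across datasets
    print(f"degree {degree:2d}: bias={bias:+.3f}, variance={variance:.3f}")
```

The rigid model barely moves when the training data changes (low variance) but stays systematically off (high bias), while the flexible one does the opposite.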
Very intuitive explanation.
You have God-gifted talent to teach. You are a gem!!!!
I agree with your sentiment. He has such understanding that he can break down concepts in a comprehensible manner.
On the last graph you show, Error vs. Degree of Polynomial, you mixed up the curves. The red one is for the training dataset, whereas the blue is for the test dataset.
Thank you for good explanation of bias & variance..❤️
Good pedagogy and easy explanation. Thanks a lot
Perfectly explained, sir.
Awesome video. It explained many concepts significantly.
Great explanation. Thank you so much!
Very well explained. Thanks
Thank You so much Krish Sir..!!
Can you make a miniseries on panel data analysis? You are by far the best statistics instructor here on YouTube.
Thank you for a detailed explanation of bias and variance. Great teaching!!
Best video...thankyou 🙏🙏
Thank you so much for clearly explaining this. I have tried so hard to get PhDs to explain this to me.. and never got a clear answer.
Thanks for this. Amazing explanation.
Very nicely explained. 👍
Love watching your video’s..You explain very well.