Sir, your videos deserve a lot more views than they have. Best content ever!!
Your videos really clear everyone's doubts. Hats off to your dedication.
Thank you for the help sir ❤
Tip: The graphs of Coefficients vs Alpha or R² vs Alpha are better visualised when Alpha is plotted on a log scale.
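A minimal sketch of that tip (assuming a toy dataset from sklearn's make_regression; the feature count and alpha range here are arbitrary):

```python
# Plot Lasso coefficients against alpha on a log-scaled x-axis,
# which spreads out the shrinkage behaviour much more readably.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=200, n_features=10, noise=10, random_state=42)

alphas = np.logspace(-3, 2, 50)  # candidate alphas spaced evenly on a log scale
coefs = [Lasso(alpha=a, max_iter=10000).fit(X, y).coef_ for a in alphas]

plt.plot(alphas, coefs)
plt.xscale("log")                # log scale on the alpha axis
plt.xlabel("alpha (log scale)")
plt.ylabel("coefficient value")
plt.title("Lasso coefficients vs alpha")
plt.show()
```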
🙏Sir, You are my favourite 🙏
Hi Nitish sir, at 10:57 you said all the less impactful coefficients will become 0, but in the ridge regression video you said that when lambda is increased it affects the highly impactful coefficients, which then tend to infinity. So how, in lasso, are we able to shrink the less impactful coefficients by increasing lambda? Will be looking for your reply, Nitish sir.
He never said that. Instead he said that in ridge, if you increase lambda, the coefficients will tend towards zero but will never actually become zero, and also that the higher the weight of a coefficient, the more it will be affected.
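A quick way to see the behaviour described in that reply (a hedged sketch on a toy make_regression dataset; the alpha values are arbitrary) is to compare the fitted coefficients of Ridge and Lasso directly:

```python
# At a large penalty, Ridge typically shrinks coefficients toward zero without
# making them exactly zero, while Lasso typically sets some of them exactly to zero.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso

X, y = make_regression(n_samples=200, n_features=8, n_informative=3,
                       noise=5, random_state=0)

ridge = Ridge(alpha=100).fit(X, y)
lasso = Lasso(alpha=10, max_iter=10000).fit(X, y)

print("Ridge coefficients:", np.round(ridge.coef_, 4))   # small but usually non-zero
print("Lasso coefficients:", np.round(lasso.coef_, 4))   # usually several exact zeros
print("Exact zeros in Ridge:", np.sum(ridge.coef_ == 0))
print("Exact zeros in Lasso:", np.sum(lasso.coef_ == 0))
```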
Thank You Sir.
Awesome sir...
Thank you so much sir🙏🙏🙏
Thanks A Lot Sir !!!
@25:31 the R² score is negative, but I thought the R² score cannot be negative. How is it negative here?
Hi sir, thanks a lot for your videos, I really learnt a lot. But I have a small question: should we consider the scale of the independent variables? Wouldn't the scale have an impact on the coefficients?
Sir, your videos are very interesting, but my question is: on the contour plot, the point where the circle meets the loss-function contours is taken as the ridge regression solution, but won't that point still have error, since it is neither a local minimum nor the global minimum?
Awesome video. Sir, one request: please make a video on hyperparameter tuning for L1 and L2 so that we can choose the best value for both.
Okay. Noted
@campusx-official Sir, the "Code: Understanding of Lasso Regression Key Points" file is not downloading. Please help.
Sir, my question is: in the previous video you said that when m is higher it decreases faster, whereas in this one you said the less important columns, i.e. the ones whose m is small, are the ones that quickly become equal to zero. Please clear my doubt.
As per the SVM discussion, lambda is inversely proportional to the alpha value. So, as lambda increases, should the bias be low, since that would lead to overfitting? Please let me know if my understanding is right or wrong.
Why is there no learning-rate hyperparameter in scikit-learn's Lasso/ElasticNet? Since there is a max_iter hyperparameter, that suggests it uses gradient descent, yet no learning rate appears among the hyperparameters. If anyone knows, please help me out with this.
Sir, please make a Telegram or WhatsApp group for student discussion. Thanks!
THANK YOU GURU
Are you doing any project?
Sir, thank you. You have described the effect of different values of lambda on feature selection. However, for a study with n features, how do we know which lambda is best, with no overfitting or underfitting? Is there a standard formula/script that could be used to identify this value of lambda for any study?
You will get the best lambda by using cross-validation.
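For reference, a minimal sketch of that idea using scikit-learn's LassoCV, which selects alpha (lambda) by cross-validation over a grid of candidate values (the dataset and the alpha grid here are just placeholders):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=300, n_features=15, noise=10, random_state=1)

alphas = np.logspace(-3, 2, 100)                      # candidate lambda values
model = LassoCV(alphas=alphas, cv=5, max_iter=10000)  # 5-fold cross-validation
model.fit(X, y)

print("Best alpha found by cross-validation:", model.alpha_)
```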
I have one confusion: ||W||² would be lambda * (W0² + W1² + ...), right, not lambda * (W1² + W2² + ...)?
Consider only the slopes; W0 is the intercept, so you don't have to include it.
It's the summation from i = 1 to n of lambda * Wi².
Thank you sir
13:05
Why is it called lasso regression?
Least Absolute Shrinkage and Selection Operator
Can anyone pin a notes link for this video?
@CampusX
Sir, please prepare the notes a bit more properly. I have even taken the paid subscription, but at revision time I can't tell what is getting missed.