Thank you so much! You're a great teacher, Mr.Aman! 🙏
Regression Loss: L1 loss, L2 loss, Huber loss
Classification Loss: Hinge loss (SVM), Cross entropy (Binary CE (sigmoid), Categorical CE (softmax), Sparse Categorical CE)
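Since these losses come up throughout the thread, here is a minimal sketch of each one (my own illustration, not from the video), assuming NumPy; the Huber delta of 1.0 and the label conventions (y in {-1, +1} for hinge, y in {0, 1} for binary cross entropy) are my own choices.

```python
# Minimal sketch of the losses listed above (not from the video), assuming NumPy.
import numpy as np

def l1_loss(y, y_hat):                 # MAE per observation
    return np.abs(y - y_hat)

def l2_loss(y, y_hat):                 # squared error per observation
    return (y - y_hat) ** 2

def huber_loss(y, y_hat, delta=1.0):   # quadratic near zero, linear for large errors
    err = np.abs(y - y_hat)
    return np.where(err <= delta, 0.5 * err ** 2, delta * (err - 0.5 * delta))

def hinge_loss(y, score):              # y in {-1, +1}, score = raw SVM output
    return np.maximum(0.0, 1.0 - y * score)

def binary_cross_entropy(y, p):        # y in {0, 1}, p = sigmoid output in (0, 1)
    eps = 1e-12                        # avoid log(0)
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

print(l1_loss(3.0, 2.5), l2_loss(3.0, 2.5), huber_loss(3.0, 1.0))
print(hinge_loss(np.array([1, -1]), np.array([0.3, -2.0])))
print(binary_cross_entropy(np.array([1, 0]), np.array([0.9, 0.2])))
```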
Tell me, bro, where do we use softmax?
In 10 minutes you taught me more than my lecturer did in 4 months
🤣🤣😂😂😂
😂😂🤣🤣🤣🤣
Dimwit 😂
Notes: cost/loss function: a function which associates a cost with a decision (decisions are made using costs, e.g. Google Maps choosing a route). Loss = difference between actual and predicted values; cost = sum of all losses. MAE is a cost function for linear regression. MAE = L1 loss, MSE = L2 loss. Loss is per observation and cost is for the whole dataset. Our goal is to minimize the cost function. These are the loss functions for regression. Any algorithm which uses optimization uses a loss function.
perfect. this is how i want someone to teach.
Please cover every aspect of machine learning.
Thanks A lot for your feedback.
Please do not stop making the videos. You are doing great explaining these topics in simple terms. Thank you!
Would love to learn entire ML from you.
Thanks Ayesha.
Thank you Aman sir!! You are the best Data Science teacher.
Your explanations are very intuitive and easy to understand. Thank you very much for running this channel - it is helping me a ton in learning data science!
You're very welcome! Please share channel with friends as well. Thanks again.
Thanks for the gentle and uncomplicated explanation.
Welcome Vikrant.
Man you are too good at this thing
keep up the good work .
Thanks Usman.
superrrrrb explanation. even a layman can understand
Thanks a lot
Very nicely explained Sir!!
Great 👍. Finished the video.
Thanks a lot Sheikh.
Nice explanation
Very nice explanation
Thanks Vidya.
Very good one. I was watching other videos provided by an institute and was unable to understand them, but after watching your video my doubts are cleared.
Please keep making videos.
Thanks for watching. Pls share with friends
Top Class Explanation
Thanks Paramesh.
Brilliant, dude! Superb explanation; this channel needs to grow.
Thanks a lot.
Perfectly explained... Got it in one view 🙌🙌
Thanks for watching.
Sir, in some explanations I've seen they use 1/2n for the mean, and the formula is written as (1/(2n)) * sigma_{i=1..n} (h(x_i) - y_i)^2. Please explain.
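A hedged note on this (my own explanation, not from the video): the extra 1/2 is only a convention for convenience. It does not change which parameters minimize the cost; it just cancels the 2 that appears when the square is differentiated:

$$\frac{\partial}{\partial \theta}\,\frac{1}{2n}\sum_{i=1}^{n}\bigl(h(x_i)-y_i\bigr)^2 \;=\; \frac{1}{n}\sum_{i=1}^{n}\bigl(h(x_i)-y_i\bigr)\,\frac{\partial h(x_i)}{\partial \theta}$$

So minimizing (1/(2n)) * sigma (h(x_i) - y_i)^2 gives the same parameters as minimizing the plain MSE.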
Hello Aman! Great video; in simple language it's clear. Can you please make a video for the classification type, as you said it will be different, like cross entropy or something else?
Hello sir, when are you going to have the next Q&A session, and at what time?
We will have it this weekend; I will announce it in the community section of my channel.
great explanation sir !!
Thanks Raju.
Great Sir...
Thanks for this video sir👍❤️
Welcome
Well explained. Thank you.
Thanks Shiva
Why do we use gradient descent? sklearn can automatically find the best-fit line for our data, so what is the purpose of gradient descent?
Gradient descent is a generic method for optimizing parameters and is used in many ways; it is not tied to only one algorithm as such.
@@UnfoldDataScience When we use simple linear regression or multiple linear regression, does it use OLS by default or gradient descent to find the best-fit line? Please answer my question.
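Not an official answer, but as far as I know sklearn's LinearRegression solves the least-squares (OLS) problem directly, while gradient descent is used by estimators such as SGDRegressor. A minimal sketch (my own, assuming NumPy and scikit-learn) showing that both routes land on roughly the same line:

```python
# Fit the same 1-D linear model two ways: sklearn's LinearRegression (a
# closed-form least-squares / OLS solver) and a hand-written gradient-descent
# loop on the MSE cost. Both should give nearly the same intercept and slope.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 150 + 0.5 * x + rng.normal(0, 1, size=200)   # toy data

# 1) Closed-form OLS via sklearn
ols = LinearRegression().fit(x.reshape(-1, 1), y)
print("OLS:", ols.intercept_, ols.coef_[0])

# 2) Gradient descent on the MSE cost
b0, b1, lr = 0.0, 0.0, 0.01
for _ in range(20000):
    err = (b0 + b1 * x) - y
    b0 -= lr * (2 / len(x)) * err.sum()          # d(MSE)/d(b0)
    b1 -= lr * (2 / len(x)) * (err * x).sum()    # d(MSE)/d(b1)
print("GD :", b0, b1)
```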
great explanation.
Excellent
Thank you! Cheers!
Big Thanks, Nicely explained 👍
Welcome Archana.
You're the best.. ;)
THank you
this is such a perfect way of explaining it
thank you so very much
You're very welcome Abuzar.
Your teaching is excellent, bro....
Others use lots of concepts to explain one concept, which makes us confused....
But your method is very simple... and can be understood by all....
👌👌👌
Thanks Fazil. Please share with others as well. Happy learning 😊
finished watching
Thanks
Do more videos👍
Sure Akhilesh.
Hi Aman, I do have a question - we use MAE/MSE/RMSE for regression problems even if it's decision trees. But when it comes to classification, we use log loss for logistic regression, hinge loss for SVM, etc. For decision trees is there anything separate, or is it based on entropy or Gini impurity? Also, NB just acts like a lookup table, right? How about there?
Thank you. It was helpful
Glad it helped!
Nicely explained, Subscribed 👍
Thank you.
Welcome.
Sir, can we relate this |w|^2 (L2) and |w| (L1) with ridge and lasso?
We will understand lasso and ridge in detail in other video.
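In the meantime, a brief preview sketch (my own, not from the video): ridge adds an L2 penalty (sum of squared weights) to the MSE cost, while lasso adds an L1 penalty (sum of absolute weights); alpha controls the penalty strength. The data and alpha values below are just illustrative choices.

```python
# Ridge (L2 penalty) vs Lasso (L1 penalty) on toy data; lasso can drive some
# coefficients exactly to zero, ridge only shrinks them.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, 0.0, 0.0, 1.5, 0.0]) + rng.normal(0, 0.1, size=100)

print(LinearRegression().fit(X, y).coef_)   # plain OLS
print(Ridge(alpha=1.0).fit(X, y).coef_)     # weights shrunk toward zero
print(Lasso(alpha=0.1).fit(X, y).coef_)     # some weights set exactly to zero
```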
Sir, I am confused about what the cross-validation score tells us versus the loss function in a classification model.
Cross-validation is a different concept; understand it here:
ruclips.net/video/rPlBijVFw7k/видео.html
@@UnfoldDataScience thanks for helping me out sir.
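For anyone else with the same confusion, a hedged sketch (my own, not from the linked video): the cross-validation score evaluates a model's performance across folds, while log loss is the function logistic regression minimizes during training; they answer different questions.

```python
# Evaluation metric (cross-validation score) vs training objective (log loss).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=0)
clf = LogisticRegression(max_iter=1000)

print(cross_val_score(clf, X, y, cv=5))   # evaluation: accuracy per fold

clf.fit(X, y)
print(log_loss(y, clf.predict_proba(X)))  # objective: log loss on training data
```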
I have one doubt, sir: when to use MAE, when to use MSE, and when to use RMSE, please?
In case your data has outliers, use MAE, not the squared ones (RMSE, MSE).
@@UnfoldDataScience Sir, is it mean absolute deviation or median absolute deviation, since the mean can be impacted by outliers? Thanks for the great video.
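A small sketch (my own illustration, assuming NumPy) of the outlier point made above: a single large outlier inflates RMSE far more than MAE, because the error gets squared.

```python
# How one outlier affects MAE vs RMSE.
import numpy as np

y_true = np.array([10.0, 12.0, 11.0, 13.0, 12.0])
y_pred = np.array([11.0, 11.0, 12.0, 12.0, 13.0])     # errors of +/- 1 everywhere

def mae(a, b):  return np.mean(np.abs(a - b))
def rmse(a, b): return np.sqrt(np.mean((a - b) ** 2))

print("clean   -> MAE:", mae(y_true, y_pred), " RMSE:", rmse(y_true, y_pred))

y_out = y_true.copy()
y_out[0] = 60.0                                        # introduce one large outlier
print("outlier -> MAE:", mae(y_out, y_pred), " RMSE:", rmse(y_out, y_pred))
```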
Sir, please explain cross entropy.
Yes will cover.
How can you take half of 68 here?
I think in the equation y = mx + c we can use beta0 = 150 and beta1 = 0.5; then the equation should be y = 150 + 0.5*176??
@faie veg, here the independent variable is height and the dependent variable is weight, so the equation is y = 150 + 0.5*68.
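For anyone puzzled by the arithmetic (my own note, using only the numbers quoted in this thread): plugging height = 68 into that line gives

y = beta0 + beta1 * height = 150 + 0.5 * 68 = 150 + 34 = 184.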
Sir, how are you applying the beta values of 150 and 0.5?
There are recommendations established by research that are followed.
thank u sir!
Welcome Krishna.
How do you pick the values for beta0 and beta1?
thanks
Thank you.
@@UnfoldDataScience Bro, honestly there is nothing to thank me for, but I have a request: try some end-to-end projects. I really want to see what your approach towards problems is.
Thank you Aman, this was really beautiful :-)
Welcome again.
So,
loss function: y_actual - y_predict
cost function:
L1 = MAE: (1/n) * sigma(y_actual - y_pred)
L2 = MSE: (1/n) * sigma((y_actual - y_pred)^2)
Did I get it correctly???
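A quick check of the formulas above (my own sketch, assuming NumPy). One nuance: MAE averages the absolute differences, so the per-observation L1 loss is |y_actual - y_pred| rather than the raw difference.

```python
# Loss is per observation; cost aggregates it over the whole dataset.
import numpy as np

y_actual = np.array([3.0, 5.0, 2.5, 7.0])
y_pred   = np.array([2.5, 5.0, 4.0, 8.0])

per_obs_loss = np.abs(y_actual - y_pred)        # loss: one value per observation
mae = per_obs_loss.mean()                       # L1 cost over the whole dataset
mse = ((y_actual - y_pred) ** 2).mean()         # L2 cost over the whole dataset

print(per_obs_loss, mae, mse)
```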
Confusing.. first he called MAE a cost function, and later on he called the same thing the loss function for linear regression.
In short.. loss function = cost function, and the terms can be used interchangeably.
The cost function is calculated over the entire data set, and the loss for one training instance.
I am a B.Com graduate; can I become a data scientist?
I don't understand the difference between the cost and loss function, because MSE is a cost function as well as a loss.
Sir, please share your mail ID for my CV review…. The Unfold Data Science mail ID does not work. Please share.
finished watching
Thanks 🙏
Welcome Pranjal