Finally, on to the model. Your explanation is clearer for me than even Andrew Ng's. I am not kidding. Thank you Nitish. :)
Yes, I too feel the same...
real
@@iamsomeone54 I came here just because I was not feeling comfortable with andrew ng
Can anyone differentiate between the loss and cost functions?
@@SumanSadhukhan-md4dq Loss function is the error expression when considered for a single data point. When we sum up all the errors over all the data points, we get the cost function. So, basically, cost function is when we sum up the loss functions of all the data points. Hope this makes sense to you.
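For anyone who wants to see the distinction in code, here is a minimal illustrative sketch (names are made up for this example; squared error is assumed as the per-point loss):

```python
import numpy as np

def loss(y_i, y_hat_i):
    # Loss function: error for a single data point (squared error assumed)
    return (y_i - y_hat_i) ** 2

def cost(y, y_hat):
    # Cost function: per-point losses summed over all data points
    return sum(loss(yi, yhi) for yi, yhi in zip(y, y_hat))

y = np.array([3.0, 5.0, 7.0])      # actual values
y_hat = np.array([2.5, 5.5, 6.0])  # model predictions
print(cost(y, y_hat))              # 0.25 + 0.25 + 1.0 = 1.5
```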
You deserve more respect bro....really I admire you...No one gives interpretations and inferences like you.
And please make videos on Neural Networks and NLP.
You are the best mentor/teacher I ever came across for data science....
It takes patience, endurance and talent to be a teacher, but you make it look so easy day after day. I hope you know just how much we all appreciate you.
Still halfway through the video, but I feel it necessary to comment here. This is by far the best explanation of the loss function and the intuition leading up to it. I have enrolled in paid courses to learn this, and let me tell you, the quality of explanation in this 53-min video is better than any other resource out there. Kudos to you...keep shining Nitish!!
Awesome man.. India is moving towards AI/ML and in the journey you are a blessing.. Anybody can learn AI/ML from your videos...
Glad you liked them
Dear Nitish, you are the best DS mentor. Thanks for creating the channel for those who seek a job in DS.
I tried to learn from many places but was not able to learn it well, even when I tried Krish Naik. I love the way you teach, bro; it's awesome to learn from you.
Best tutor for ML on YouTube. No one teaches with this much depth.
As of today this video has 8k views, but I am sure in the next year it will cross 1 lakh views. This is the best explanation of the cost function derivation.
I have never seen a teacher who teaches maths so easily. I mean, hats off to you, sir. Every student deserves teaching like you give.
You are amazing!! Just the way we want the lectures to be!! When you take care of uppercase and lowercase chars while writing terms like X_train, y_train, it reflects your in depth understanding of the subject and the notations one should follow. You will reach great heights.
Finally I found the best teacher, who will help me in my ML journey.
Hats off to you sir. Apart from immense knowledge, your patience and ability to simplify abstract topics is outstanding. You are a gifted teacher. Thank you is not enough.
15:54 Actually it should be ŷi, because that line has been drawn by the model, so it will be a predicted value, right? And the point above should be yi. I think you said it the other way around. Could you please reply to this? And all your sessions are awesome. Thank you so much for such a great explanation.
Hi,
I also observed the same thing, was checking for any existing comments and found this.
I think he just accidentally marked y as y hat on graph but everything else is correct, because he referred y hat as the one predicted by the model and y as the actual result. So in the formula it makes sense that we're subtracting the predicted value from the actual value.
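For anyone skimming this thread, a compact restatement of the convention being debated (standard simple-linear-regression notation; the hat marks the model's prediction):

```latex
\[
  \hat{y}_i = m x_i + b \ \ \text{(predicted value, lies on the line)}, \qquad
  y_i \ \ \text{(actual value, the plotted point)}, \qquad
  d_i = y_i - \hat{y}_i .
\]
```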
Best explanation that exists, hands down!
You are a boon for us. As a token of gratitude, I don't skip the ads in your videos.
machine learning is fun right now but time consuming too. All thanks to you sir.
Awesome video. I couldn't get this much detailed explanation even from great YouTube data science teachers.
I took one course which has over 1M students, and I still find the course rubbish; they did not go deep at all. And here is this guy with just 220k subscribers (at the time of commenting) who has god-level teaching skills and goes very deep into the what and the how. You deserve more than what you are getting. Hopefully people will find this channel and you get more for what you are doing.
You are just amazing 🔥
You’re helping so many lives god bless you!!
SPEECH-LESS!!!!watching your videos is the best decision i've everrrr madeee.
Sir u are really .......Superrr
Respect ++ from NIT Raipur
You are awesome at explaining. I have never learned anything as simply as you explained it; you made complex things very simple for me. You are awesome.
Loved your explanation, sir. Really got a taste of calculus after so long.
Thank you, sir, for all such beautiful content. There is a small mistake at 16:00: the predicted and actual values are labelled incorrectly.
The most underrated data science channel on YouTube
This playlist is a blessing. Thank you Sir🙏
I think the way you have cleared my concepts has given me more confidence to talk about ML models and their fine-tuning.
Really very intuitive understanding.
Woww..... ! Satisfied with full clarity. Thank you sir.
One of the best explanation i saw
Thanks a lot sir😀
Such an Amazing teacher with such an incredible content and explanation.
Very grateful to you Nitish!
A very biggg thank you to you.
Really great service to society. God bless you. May you get everything in life.
Sir, you have written (y hat) the other way around: as written, (y hat) is the actual value and (yi) is what the model predicted. But you are saying it correctly; you have perhaps just written it the wrong way around. (yi) should be above and (ŷ) below, because the lower point is what the model predicts, so the lower one should be ŷ.
And di = (yi - ŷ), with yi above and ŷ below. 15:30
Please reply
thank god someone noticed, i was sooooooo confused T_T
@@niketasengar9191 Yeah, I still haven't found a solution for this..! Did you find one?
Brother, what a superb teaching style.
Sir, you are such a genius; I have never seen anyone like you...🙇
Very good explanation of the linear regression algorithm. You covered the math behind it. I never thought that I could learn how the algorithm works in the backend. Thanks for the explanation.
@16:00 there is a small mistake: if the predicted value is represented by the hat symbol and the actual value without the hat, then y hat must be below and y must be above along the y-axis.
Great explanation. Concepts clear. The mathematical intuition was very good.
Selfless service. Once I get a placement, I will surely do something to grow this channel.
No words to say after watching your videos just love you brother...
16:00 In my opinion the predicted value should be the one on the line, and the actual value is the one we plotted ourselves. So the dot drawn on the line at (x, y) should be ŷi, and the actual point above the point on the line should be yi, because that is the actual data and whatever is predicted lies on the line. Accordingly we should take yi - ŷi (actual value - predicted value); only then does the equation work out. I think you mistakenly wrote ŷ at the actual point at 15:55.
Yup, I also think the same
You are the best ever, Nitish bhai; hats off to you.
No one can explain it more simply than this.
Awesome videos, You are the best teacher for data science
Believe me, your videos are great. I can say better than Andrew Ng's. Thank you for all these videos.
Loved it!!! No one can explain like you sir! ❤❤❤❤
Wow...
I got full clarity now😊
You are great, bro. What a way of teaching; incredible.
.....no doubt knowledge has no boundaries. Lots of love from Pakistan ❣
you have created a gold mine...
I rarely subscribe to any channel here, so I subscribed to yours. Keep up the good work to the advanced level and beyond.
Next Level Explanation.......thanks for sharing🫡🙂↕️
Hidden Gem On YouTube
You deserve a lot of respect. Thank you for the effort!!!!!!
You deserve million views
16:00 I feel like he said it the opposite way. Am I wrong, or did he get confused?
What a great explanation 👏🔥
Hats off 🫡🙇♂️
It was a very lovely video, sir. Thank you.
At 16:20, isn't ŷi the prediction that lies on the line, not the actual y?
true
Thanks sir for the video🔥🔥🔥🔥🔥🔥🔥🔥🔥 Hats off to you
That was exceptionally good... Thank you for this amazing explainer
thankyou so much sir for adding so much value...
Brother you nailed it in explanation, best tutorial
At 19:11 you have taken y - ŷ, and ŷ is mx + b, so why have you taken (y - mx - b)? Why is there a minus sign between mx and b?
The minus sign applies to the whole bracket: when it is distributed over (mx + b), it becomes -mx - b.
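Written out as a single worked step (just expanding the substitution ŷ = mx + b used in the video):

```latex
\[
  y_i - \hat{y}_i \;=\; y_i - (m x_i + b) \;=\; y_i - m x_i - b .
\]
```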
I hope you always keep the premium content free ❤️☺️☺️☺️❤️......❤️
Great learning, bro.
Really enjoyed it.
Love you, bhai.
The best Explanation ever!
Amazing Sir !!!!
Love your videos
the way you teach is awesome
Thanks, sir. God bless you. Keep it up.
Where is your Patreon? You deserve lots of love and respect. Thank you for everything :) GBU
Clear and concise explanation!!
Will it be possible to create one lecture series on in-depth Python? I have checked the one uploaded by you. But it has some missing lectures. Also, in some lectures the board is not visible. Thanks in advance!
Clean and crisp Explanation
Best Teacher Ever!
Sir ji, you are great !! :)
Thanks sir for this explanation on linear regression
Sir Your videos are amazing
Watching it like a movie.... thank you sir... 🙏🏻
Awesome, sir ji, you are great
31:02 If the first derivative is zero, it can be either a maximum or a minimum. So when we solve our equation from this, how do we confirm whether it is a maximum or a minimum?
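For anyone wondering about this, here is a sketch of the standard check (the second-derivative/convexity argument; this step is an addition here, not something shown in the video):

```latex
\[
  E(m, b) = \sum_{i=1}^{n} \bigl(y_i - m x_i - b\bigr)^2, \qquad
  \frac{\partial^2 E}{\partial m^2} = 2\sum_{i=1}^{n} x_i^2 \;\ge\; 0, \qquad
  \frac{\partial^2 E}{\partial b^2} = 2n \;>\; 0 .
\]
```

Because E is a sum of squares of affine expressions, it is convex in m and b, so the stationary point where both first derivatives vanish is a minimum (in fact the global one), not a maximum.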
Your course is really great; can you cover time series as well?
Man, you won my heart ❣️🔥
That's what we call learning algorithms from scratch. Sir, can you tell us which book you preferred to learn this from? Just love your content.
Will soon be uploading a video on this topic.
U are the best.........really amazing
After getting the value of m at 39:27, I think (xi - x̄) can be cancelled?
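For reference, the closed-form slope the derivation ends with should be the standard OLS result (assuming the usual notation with x̄, ȳ as the sample means):

```latex
\[
  m \;=\; \frac{\displaystyle\sum_{i=1}^{n} (x_i - \bar{x})\,(y_i - \bar{y})}
               {\displaystyle\sum_{i=1}^{n} (x_i - \bar{x})^2}
\]
```

The numerator and denominator are each a sum over all the points, so the (x_i - x̄) inside one sum cannot be cancelled against the (x_i - x̄)² inside the other; that cancellation would only be valid if each sum had a single term.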
Hi sir, you said that in OLS we don't use an approximation process like differentiation etc., but while explaining OLS you went on to explain gradient descent and then proved the formula for OLS. I have a doubt about that, sir.
Wish I could like the video twice!
Thank You Sir.
@12:35 Can anyone explain what penalize means here?
Great Video. clear explanation
All doubts cleared thanks bro
@16:22 - di should be y^-yi, right?
best video for LR maths
I have one question: in interviews, do they ask how the formula is derived?
No, but the understanding gives you conceptual clarity about the topic.
In the error function, along with m and b, y also depends on x, right?
Will you be working on the SGD regressor in upcoming videos?
lovely explanation
I completely understand this concept, but I have a doubt: should I make notes of the derivation portion? Is this formulation part something I will need to revise in the future? Please reply if anyone knows.
Your explanation is mind-blowing, but my question is:
when we square all the data points, won't the line be formed according to the squared data? Our actual data is not squared, so how will it adjust according to our real data?
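A small illustrative sketch of where the squaring actually happens (example data and code are made up, not from the video): the data points themselves are never squared; only the residuals inside the cost are, and the line is still compared against the original data.

```python
import numpy as np

# Original (unsquared) data
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 4.2, 5.9, 8.1])

def cost(m, b):
    residuals = y - (m * x + b)    # differences in the original data space
    return np.sum(residuals ** 2)  # squaring happens only here, inside the cost

# The candidate line y = m*x + b is evaluated against the original y values;
# squaring the residuals just penalizes large errors more heavily.
print(cost(2.0, 0.0))
```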
sir, you are a legend