I followed along and copied the code in Jupyter, and that was not only super satisfying but extremely enlightening. I really never understood what the activation function was for until now. Thank you so much!
You should be proud of this video, it's very good!
Explained super easily, as always. I learn a lot from your videos. You are very knowledgeable and very underrated; I wish you lots of success. 🤗
High-quality content, as always.
Awesome, bro, perfect explanation.
Excellent. Really. Thanks.
That was awesome. Explained well in simple words. Waiting for a PyTorch crash course, please!
Wow... I understood it 😆
That's the difference when it's explained by somebody who understands it themselves.
Small correction to the title: it's not about why we need activation functions, it's about why the activation functions can't be linear. It's a small but important difference.
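To make the correction above concrete, here is a minimal sketch (my own NumPy example, not code from the video): with no nonlinear activation between them, two stacked linear layers collapse algebraically into a single linear layer, so the extra depth adds no expressive power.

```python
import numpy as np

# Sketch: two linear layers with no activation in between.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # layer 1 (3 -> 4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)  # layer 2 (4 -> 2)
x = rng.normal(size=3)

# Forward pass through both layers, with no activation function.
y = W2 @ (W1 @ x + b1) + b2

# Algebraically equivalent single layer: W = W2 W1, b = W2 b1 + b2.
W, b = W2 @ W1, W2 @ b1 + b2
assert np.allclose(y, W @ x + b)  # identical output: depth bought nothing
```

Putting any nonlinear activation (ReLU, sigmoid, tanh) between the two layers breaks this collapse, which is exactly what the title is getting at.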
I have a doubt: why not use a polynomial regression algorithm instead of a NN, since that can also fit nonlinear data?
In fact, some prefer the logic of polynomial equations.
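For what it's worth, here is a small illustration of the question above (my own sketch, assuming NumPy; the data and degree are made up for the example): plain polynomial regression really can fit a one-dimensional nonlinear curve.

```python
import numpy as np

# Fit a degree-5 polynomial to noisy nonlinear 1-D data by least squares.
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)  # nonlinear target + noise

coeffs = np.polyfit(x, y, deg=5)   # least-squares polynomial coefficients
y_hat = np.polyval(coeffs, x)      # evaluate the fitted polynomial
print("mean squared error:", np.mean((y - y_hat) ** 2))
```

This works fine in low dimensions, but the number of polynomial terms grows combinatorially with the input dimension and the degree, whereas a neural network learns its nonlinear features from data; that scaling difference is one common answer to the question.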
How do you always know what we will want ❤
👏👏👏
Due to nonlinearity. There; saved you 14 minutes.
No, brother, many tutorials don't talk about this clearly and just move on to backpropagation...
Just imagine a beginner reading your comment without any understanding of linearity!
Thx m8
Hahaha... Then what is nonlinearity?
@mickeyk899 Opposite of linearity. You're welcome.
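Since the thread never pins "linearity" down, a quick check (my own example, not from the thread): a function f is linear only if f(a + b) = f(a) + f(b) and f(c·a) = c·f(a). ReLU fails the first test, which is why it counts as a nonlinear activation.

```python
# ReLU violates additivity, so it is nonlinear.
relu = lambda v: max(v, 0.0)
a, b = 1.0, -2.0
print(relu(a + b))        # 0.0
print(relu(a) + relu(b))  # 1.0  -> relu(a + b) != relu(a) + relu(b)
```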