Join Shortform for awesome book guides and get 5 days of unlimited access! shortform.com/artem
Can you talk about liquid neural networks? I'm interested to know whether it's revolutionary work that deserves more recognition and a following.
arxiv.org/pdf/2006.04439.pdf
Artem, I want to thank you, not only for publishing excellent material (of the hundreds of DL/ML videos I've saved, yours is top 5, really) but also for your great intonation, which helps A LOT in capturing attention in this day and age of constant distractions. THANK YOU 🙌
Back prop is a hard, heavy thing to explain, and this video does it extremely well. I mean, that section 'Computational Graph and Autodiff' might be the best explanation of that subject on the internet. I'm very impressed - well done!
You two are the best channels I have found in the SoME episodes. It's great to see this interaction between you guys.
Love your videos
It won't be complete if there is no mention of sine waves in neural networks.
Where is that section 'Computational Graph and Autodiff' ?
Yeah really helped me get the significance of autodiff
Funnily enough, the calculus portion of the video is probably one of the best explained I've seen
Why would that be 'funnily enough'? What a diss lmao.
@@George70220 I don't think CuriousLad meant it as a diss; it's just that when Artem made the video, he explained the calculus section as background information. The partial derivatives and gradient descent weren't the main topic of the vid, yet you could show this to a Calculus I student and they would be thanking him for the explanation even if they had no interest in learning backpropagation! That's why, funnily enough, while the intro calc topics weren't the main part of the video, that portion would be very helpful to anyone starting out in calc!
I don't agree. For example, the act of minimizing the loss function and gradient descent were not properly linked; there were just two pieces of information dumped in series, unprocessed.
I found it unnecessary. Anyone who clicks on a video about back propagation likely already knows calculus, and if they don’t, that short primer is not going to be enough foundation for the rest of the content.
Nasdaq please buy toggle 0:25
"Wait, It's all derivatives?"
"Always has been"
Great work pal. Provides excellent clarity.
Looking forward to the second part.
😂 Turns out back propagation isn’t just magic
This is by far the clearest explanation and simplification of backpropagation I have watched.
It’s probably the best explanation of backward propagation. Hats off to your hard work; saving this, it's such valuable content.
It makes sense that you would cover both computational neuroscience AND machine learning since they both play a significant role in AI research. The sort of content you're making is definitely 3Blue1Brown level. Keep up the good work!
He also managed to squeeze an entire calc 1 course into this single video. It's amazing
I knew that calculus is important for machine learning but never knew that 12th-grade derivatives are this important.
When you talked about the chain rule, it brought me back to my school days; I never thought that derivatives, integration, and probability would be used this way in the future.
Well explained video.
Thanks for sharing this knowledge and conveying the process so simply.
Very True!
By far the best ML explanation I have seen on internet.
This has to be one of the greatest explanation of the inner working of learning in ML, I love it!
indeed
The visuals in this video are from another planet. So good!!!!!!!!
This just might be the most underrated video on Back Propagation that I've ever seen! I hope more people come across this
I've been trying to get into ML for quite a while now. This is by far the best explanation of gradient descent and back propagation hands down!!!
Amazing work!!!
Understanding something complex requires high intelligence. Explaining it simply requires even higher intelligence. You are one of the best teachers that I have encountered in my life! I'm grateful!
All these basic concepts, such as derivatives and the least squares method, I'm learning right now in college. Watching these kinds of machine learning videos has made me understand the practical applications of these theoretical concepts a bit better now 😌
I've seen probably 20 videos on this, and your explanation of the derivatives for someone not in calculus was really helpful. Thanks.
I actually pictured this all in my head successfully where I thought I had everything in a canonical deep neural network figured out the other day. It’s one thing to hold it, it’s another to do the detailed, gritty work of explaining it in video format. Very well done.
Damn, I was wondering where you'd been for over half a year, whilst I was stuck in backpropagation 😂 and here you came back like a true mind reader. Glad to see you back ❤
He was calculating your backward step so you can make your next forward step (sorry, couldn't resist) XD
@@highchiller He just gave you the right explanation gradient so that you can optimize your learning loss function 😂
I just have to say this goes way beyond the quality of the many chain rule videos I've seen so far. Good job man, you've got some impressive skills to keep me watching a math video and taking notes past my usual bedtime.
you take notes?
@@marc_frank It's generally a good idea if you are trying to learn. Don't be passive if you want it to stick.
Taking notes, making sketches of the ideas, doing the math are excellent learning techniques. Old timers like me always do that 👍
I should be watching gameplays but here I am procrastinating. Jokes aside, I'm 13 minutes into the video and astonished by your crystal-clear explanations and the quality of your material. This is gold.
That was an outstanding explanation. Your ability to explain higher mathematical concepts in such simple terms is really an amazing service to the rest of us who wanna understand these subjects but don’t have a mathematics degree. Thank you.
This is the best resource I have found to learn backpropagation. The visualization of each concept made this very clear. I can't even imagine the amount of effort you have put into this video.
This is a visual masterpiece! Well done!
Much of this was a review for me as I took the time to go through all this last year. I did an implementation of the MNIST handwritten number neural network and had to learn all the calculus covered here to work out the backpropagation math. You really do have to dig in to it to get a good handle on it but it's fun stuff.
There could not have been a better explanation. Hats off to you
Absolutely one of the best videos explaining data points and regression formulas I have ever seen. Amazing work
In simple words: backpropagation is the method for computing the gradients that gradient descent uses. To minimize the loss, we need to find how the loss changes with respect to each parameter, and that is obtained with the chain rule, which is based on derivatives and calculus. It works in the backward direction, hence the name backpropagation.
Still so much confusion in my mind regarding this process.
The video is very useful and the editing is extraordinary.
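A minimal Python sketch of that corrected picture, with a single made-up weight w and one made-up data point: backpropagation computes the gradient via the chain rule, and gradient descent then uses it to update the weight.

# loss(w) = (w*x - y_target)^2 for one made-up data point
x, y_target = 2.0, 10.0
w, lr = 0.0, 0.05  # initial weight and learning rate

for step in range(100):
    pred = w * x                        # forward pass
    loss = (pred - y_target) ** 2
    grad = 2.0 * (pred - y_target) * x  # backward pass: chain rule
    w -= lr * grad                      # gradient descent step

print(w)  # converges to 5.0, where the loss is 0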
This video explains the mathematical basis of neural networks in a way I understood the first time, well enough to be able to explain it to somebody else. Thank you for that. I can't even imagine how much work you put into the animations. A masterpiece!
Excellent explanation!! You have done a selfless service to humanity.
Dude, this is the most beautiful ML video I've ever seen. Highly informative, yes, but also beautifully made. Thank you for your work.
This video would have saved me so many days that I have spent on researching backpropagation 2 years ago
Animation is great, but more and more people are doing it now. What makes this special is the story; the complexity build-up is perfect and efficient. One needs a deep understanding of the subject and strong teaching skills to produce this.
As a student in this business, who has passed through a bunch of professors, I can say with confidence! With this trader, you will both learn and earn and, importantly, receive advice. Everything is competent and clear, without a bunch of any unnecessary movements! Keep up the good work!🤣
The best and most understandable explanation I have ever seen. You explained the essential basis of Artificial Neural Networks so beautifully. I really congratulate you
Most comprehensive explanation EVER. My opinion: better than 3b1b. No offence to 3b1b; he's great at it and one of the pioneers who did these kinds of visual explanations. But I like your explanation as it is slow-paced and comprehensive.
Yeah, 3b1b definitely deserves respect from me, but I think even he would recognize that this video is very carefully done.
I like that these people just care about truth and perfection, and, even with a little bit of envy, care about the best product being made.
I just watched this video after completing my first lecture of Deep Learning on Backpropagation and Gradient Descent.
thanks man! appreciated. Really solid content.
Guru of fundamentals. I can't resist subscribing to your channel and watching all of your videos. The way you explained the Chain Rule and the logic behind it is awesome. I am trying to visualize the Quotient Rule of derivatives in your way. A good teacher always makes you THINK 🙏
Protect this guy at all cost please
This is the best ever explanation I have seen. Thanks for taking the time and doing something extraordinary.
I wish the Chain Rule was explained in this manner when I was in university. I understood how to do it on paper just fine, but this explanation makes the reasoning behind it make complete sense.
The best explanation about Deep Learning. Grateful.
If you couldn’t understand this explanation, this visualization, this clarity… then nothing else can work for you, I swear.
Very good video, very well explained. But there is one problem you didn't mention. When training very deep neural networks and using a sigmoid or tanh function as the activation function, backpropagation loses its "powers": the learning process becomes extremely slow and the results are suboptimal. One of many solutions is to use a ReLU or ELU function in the hidden layers instead of sigmoid or tanh. It also matters how we initialize our weights at the beginning, for example with He initialization...
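A small numpy sketch of the two fixes this comment mentions, ReLU activations plus He initialization; the layer sizes are arbitrary example choices, not anything from the video.

import numpy as np

rng = np.random.default_rng(0)

def he_init(n_in, n_out):
    # He initialization: std sqrt(2/n_in) keeps ReLU activations from vanishing/exploding
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_out, n_in))

def relu(x):
    return np.maximum(0.0, x)

# arbitrary sizes: 64 inputs -> 32 hidden units -> 1 output
W1, b1 = he_init(64, 32), np.zeros(32)
W2, b2 = he_init(32, 1), np.zeros(1)

x = rng.normal(size=64)
h = relu(W1 @ x + b1)  # ReLU hidden layer instead of sigmoid/tanh
y = W2 @ h + b2
print(y)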
Wow wow wow wow! From what I gather here, the key to understanding ML predictions is that we are looking to fit the function f(x) = b + k1x + k2x^2 + k3x^3 + k4x^4 + k5x^5. The machine just turns the dials until it finds the best fit, using a function such as MAE or MSE. So this is why ML needs so much GPU power then! I'm mind-blown, in case you didn't notice the wows earlier. :) Thank you so much for this.
Well, kind of. In ML in general we are not fitting that exact function. We can fit any function and those functions in real deep learning models are very complex.
@@szef_fabryki_azbestu The function above is the only function that is gradually adjusted by stochastic gradient descent (SGD). Watch the course again. The weights and bias that SGD is attempting to determine are those of the above function. They are used to make predictions in Deep Learning. You're confusing concepts here. Think again, please.
@@BijouBakson Unfortunately, you are the one confusing concepts. That's not how this works. Sure, in that particular example we are optimizing to get the parameters of that particular function, but that's just a simple example.
@@szef_fabryki_azbestu It seems like we are stuck in a back-and-forth, accusing each other of confusion. Maybe you are right! I mean, you could be Einstein for all I know. So... please help then: in summary, what do they refer to when they say weights and bias? How do you understand it?
@@BijouBakson Weights and biases in NNs are parameters a and b for linear functions:
y = a*x + b
That's for 1 neuron. For one layer of neurons we can write it as a matrix multiplication and vector addition:
Y = A*X + B
On top of those functions we usually apply some non-linearities like ReLU, tanh, sigmoid and so on. In classical multilayer feedforward networks we stack those layers on top of each other e.g. f(g(h(x))). Example fully connected network with 3 layers and tanh as activation function can be written as:
Y = A_3*tanh(A_2*tanh(A_1*X + B_1) + B_2) + B_3. Here we have weights in the matrices A_1, A_2, A_3 and biases in the vectors B_1, B_2, B_3. So no, in the general case when we train a NN we do not fit the function f(x) = b + k1x + k2x^2 + k3x^3 + k4x^4 + k5x^5. Of course, you can say that you just meant a polynomial approximation of functions (a Taylor expansion), but you explicitly mentioned only that particular f(x), a polynomial of order 5. What's more, neural networks are universal function approximators, so they can approximate any function to any degree of accuracy, but only if the network's activation function is not a polynomial.
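The 3-layer tanh network from this reply, written out as a runnable numpy sketch; the layer sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

# arbitrary sizes: 4 inputs, two hidden layers of 8 units, 1 output
A1, B1 = rng.normal(size=(8, 4)), np.zeros(8)
A2, B2 = rng.normal(size=(8, 8)), np.zeros(8)
A3, B3 = rng.normal(size=(1, 8)), np.zeros(1)

def network(X):
    # Y = A3 @ tanh(A2 @ tanh(A1 @ X + B1) + B2) + B3
    H1 = np.tanh(A1 @ X + B1)
    H2 = np.tanh(A2 @ H1 + B2)
    return A3 @ H2 + B3

print(network(rng.normal(size=4)))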
I'm 31 now, had something like 13 years of math in school and another 5 years at university, and this is the first time I really understood how derivatives work, because of visualization instead of "you calculate it this way and derive it that way, now memorize".
May I ask which university you went to?
@@ArnaudMEURET Sorry, but I don't believe that anybody who has no idea what the tangent line of a function at a point x looks like, and what it means (despite the millions of exercises teaching its meaning, and the dozens of graphs in literally every workbook), could actually get through 5 years of a math-related subject. This guy is straight up lying or trolling.
@@WsciekleMleko exactly. People shit on school because "muh education system bad" and forget about all of the interesting stuff they actually teach.
No, you're not heroes trying to fight against the big bad. You're just lazy and want an excuse to keep being one.
Um, how long did it take you to graduate grade school? 13 years of math! Even if you did start learning maths in pre-K or K-12, it's basic mathematics until about grade 4, when you start to build on that foundation with algebra, geometry, etc.
In other words, the maths ain't mathing!
I have been doing ML research for a few years now, but somehow I was drawn to this video. I am glad to say that it did not disappoint! You have done an amazing job putting things in perspective and showing respect to calculus where it is due. We forget how the simple derivative powers all of ML. Thank you for the reminder!
Thank you! That’s really nice to hear!
I can't tell you how excited this video got me once I realized I was understanding every single step effortlessly. 😂😂😂
Thanks so much for the explanation.
God bless you! 🙏🙏🙏🙏🙏🙏
Where were you all this time...? I've been trying to understand this concept for the past 2 years, and now it's clear after watching this video. Honestly, my maths was not up to the mark. After seeing your video, so many important concepts are clear to me.
I don't have enough words to thank you... God bless you.
I'll share your videos with my friends. Please keep it up... 🙏🙏🙏🙏🙏🙏🙏🙏🙏
I just implemented this in Python for a simple quadratic equation... THANK YOU!!!! I just learned Python and machine learning!!!!!!!!!!
Using a desired y = 0, I could also find one solution of the equation... wow, I love this so much!!
The only difference is that I made x the weight rather than the coefficients, which I wanted to be fixed inputs.
What you helped me realise is that any system that can be put in a computational graph like that (30:04) can have backpropagation embedded in it, regardless.
THANK YOU, I'm out of words.
Also, when the next loss was bigger than or equal to the previous loss after one iteration, I divided the learning rate by a factor of 2 or 10 for more accuracy, and when the next loss was smaller than the previous one, I multiplied the learning rate by a factor of 1.1 to 1.5 to speed up the process... thus getting results in hundreds or even thousands fewer generations/iterations and in less time!!!!!
I can use this for optimizing my desired outputs in any system!!! JUST WOW!!
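A rough Python sketch of what this commenter describes: treating x as the trainable weight of a fixed quadratic, driving its output toward a desired y = 0, and adapting the learning rate by the stated factors (the quadratic itself is a made-up example).

def f(x):
    return x * x - 4.0  # fixed coefficients; x plays the role of the weight

def loss(x):
    return f(x) ** 2  # desired output y = 0

x, lr = 3.0, 0.001
prev = loss(x)
for _ in range(500):
    grad = 2.0 * f(x) * (2.0 * x)  # chain rule: d(f^2)/dx = 2*f(x)*f'(x)
    x_new = x - lr * grad
    cur = loss(x_new)
    if cur >= prev:
        lr /= 2.0   # loss got worse or stalled: shrink the learning rate
    else:
        lr *= 1.2   # loss improved: speed up
        x, prev = x_new, cur

print(x)  # converges to 2.0, one root of x^2 - 4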
Finally a solid explanation of backpropagation. Thank you!!
Man, you really nailed it, especially the Computational Graph and Autodiff part. I had heard about them so many times in lectures from Stanford and others, but this was impressive.
In traditional statistics (which preceded machine learning by many decades) the "loss function" was called the "deviance" or "the variance"
This is incredibly well done and helped me visualize derivatives comprehensively. Thank you.
One of the best visual explanations of the backpropagation algorithm I've seen! The animations are really good.
Sure that it was the back propagation algorithm?
simply the best presentation on the subject
This has to be the best explanation of the chain rule ever! Thanks
Watching this video was like a breath of fresh air after some heavy math calculations! The visual explanations really helped solidify my understanding of backpropagation. I appreciate how clear and easy to follow the graphs were. Keep up the fantastic work! Can't wait for more graphic doses like this.
this is the most intuitive video I have ever come across. Amazing work!!!!!
He is back! Greetings from Brazil, we've all been waiting for this release!
Hands down the best explanation of backprop there is.
So clear and concise! Thank you for creating this.
Excellent explanation of back-propagation, the building block of machine learning. Thanks a lot.
I cannot imagine just how much effort and work this took to make.
I think I just found my favourite channel of all times.
I've been on YT since 2011 and never had a crush on a YT channel before today é.è
The world needs more of you bro
Best graphical experience with clear information. Really enjoyed it throughout the video!!!
Very well explained how backpropagation works and how the loss function helps in determining the optimal minimum using calculus. Great detail, which helps newbies like me understand this complex topic much better.
Hands down the best explanation I have seen so far! So clear and easy to understand!!
So much effort in this video; the quality of the content is at the same level as 3B1B. Keep it going, man.
Bro, I'm 2 minutes in and your graphics are insanely good; I can already tell this is going to be a treat. Holy smokes man, I'm having a graphicgasm.
The best explanation of machine learning I have ever seen on YouTube. Amazing work, thank you 👍
This video has an amazing and easy-to-understand explanation of the basics of Calculus. Many Thanks to the Creator 🙏🏼
Best description on the topic on the internet!
Magnificent work, from the beautiful, creative, elegant design, to the mastery in teaching. Thank you!
This is the best video about this topic. Learned a lot of things. Took me 2 or more hours but I understand it now. Thank you!
Amazing how you can explain it so well, so simply. You have a subscriber!
What an amazing video. I hope one day they come up with some world prize for 'free education heroes'. 173k views for a video like this is simply disgusting. This guy deserves maybe 2 billion views. God damn it, that makes me mad.
Ehm, ya do realize this flies over the heads of most people; you'd have to stack up thousands to find one person who is interested and can understand this properly. It is also not really needed for a plumber or a bakery cashier to understand ML improvement/approach velocity, which is what I'd call this in a sense. Or, a visual way to pick a good method for it.
Some people just want to see the world learning. Great Video Artem!
Another gem of a video, well done Artem!! This channel deserves 1M+ subscribers, there's nothing else like it on RUclips.
Subscribed after just watching five mins.. 😊
This is one of, if not the, best videos I've seen that thoroughly explains backpropagation. It will definitely help me be able to better explain the algorithm to others, so thank you for creating it.
That was fire bro! Gonna have to rewatch to understand the back step, but a lot clearer than most videos
Excellent presentation. You made it feel like, starting from basic calculus, machine learning is just one simple step away. What would be interesting is: what are the theoretical underpinnings of this method? When do we say learning is successful? What is the computational complexity of neural networks?
27:27. It clicked here.
Seriously amazing video. Honestly, all your videos are.
Thank you so much.
I'd love to see more videos relating to any relationships between artificial neural networks and biological neural networks
The only relationship is that ANNs store something and brain NNs also store something. That's it. The analogy ends here. Everything else is completely different =)
WOW!!! The amount of animation you have made is just incredible. I would really like to not know about backprop again in order to fully appreciate this video!
That's the most amazing way of explaining things that are so hard to understand.
Glad to see an ML-related video from you! As you have a neuroscience background, I would love to see some videos that compare the current state-of-the-art architectures in ML with some of the inner workings of the brain. For example, whether there are any structures in the brain with some resemblance to the GPT/transformer architecture. Even though the brain is light-years away, I think that could be interesting :)
Your approach to trading is truly impressive. Thank you for teaching me so much!
Wow, amazing, thank you. I've read and watched many videos on this topic, and this is the one where I finally "got it".
As soon as I saw this video, I knew it was going to be the best of this kind on the Internet. And it was. Fantastic video!
I think this video alone made all my Calculus I and II classes make sense now
This is just superb, thank you Artem! The timing couldn't be any better, as the gradient descent algorithm was mentioned in Dehaene's "How We Learn", which I'm currently reading.
Some parts were really hard, can't deny that. Thank you for your work, it is amazing. How are you able to be so confident with these concepts while still being at the PhD level?
I'm curious: Are your video editing skills superior, or do your tech skills take the lead? Your expertise is remarkable! I'd love to see a video on Transformer from you.
This is the only thing I never understood; I hope to finally understand it. It's weird how this video gets recommended just as I wanted to google backpropagation.
Excellent explanation - I already understood this conceptually but this video gives a very good intuition for the repeated chain rule application
this video has amazing animations. You/your team clearly have a very high attention to details
Wow, hats off to you! Can't even imagine how long it takes to make something like this
Waiting patiently for the second video 🫰♥️. Much love from Kenya, thank you for making me understand back propagation. Started watching your channel because of Obsidian, stayed for the AI lessons 🫰.
Lucid explanation... I am yet to get my head fully around all of it, but if I review this a couple more times I am sure I will... Thanks for this, it has rekindled some interest in basic math...
I was reminded of 3Blue1Brown channel when I was watching this...