Back Propagation in training neural networks step by step

  • Published: 5 Sep 2024

Comments • 129

  • @oussamaoussama6364
    @oussamaoussama6364 1 year ago +50

    This is the best video on YT that I know of that explains back propagation and gradient descent clearly. I've tried so many, but this one is by far the best. Thanks for putting this together.

  • @user-ph3qi5to4r
    @user-ph3qi5to4r 10 months ago +5

    I have seen so many YT videos about backprop & gradient descent these days, and yours is the clearest among them, including one with millions of views. This video deserves more exposure. Thank you!

  • @user-ou7dq1bu9v
    @user-ou7dq1bu9v 4 months ago +2

    Two years after publishing, your video is still a gem 💥

  • @marcusnewman8639
    @marcusnewman8639 1 year ago +4

    I am doing my bachelor's in insurance mathematics, and one of my tasks was to model a forward neural network. I had no clue what it was. Watched 50 minutes of this guy and now I understand everything. Really great videos!

  • @k1b0rg_1
    @k1b0rg_1 24 days ago

    Way better than all these videos with fancy graphics. Keep it simple; that's the way to understand complex things.

  • @xintang7741
    @xintang7741 10 months ago +3

    Genius! Better than any professor in my school. Most helpful lecture I have ever found. Thanks a lot!!!!!!!

  • @msalmanai62
    @msalmanai62 2 years ago +4

    Very clear and the easiest explanation. Not only did you explain backward propagation in an easy way, but you also clarified a lot of other concepts. Thanks and a lot of love ❤❤❤

    • @bevansmithdatascience9580
      @bevansmithdatascience9580 2 years ago +1

      Glad it was helpful Muhammed! Please do comment on what other concepts you were helped on. Thanks

  • @carlhopkinson
    @carlhopkinson 1 year ago +3

    You really have a talent for explaining this difficult subject clearly so that it makes sense and links up to the intuitive notions.

  • @ScampSkin
    @ScampSkin 1 year ago +2

    There are a lot of nice-looking videos on BP, but this one finally makes it clear that each step depends only on the last layer, not on all the previous neurons. It was intimidating and overwhelming to think that I had to keep track of all the neurons and their derivatives, but now it is clear that I can do everything one step at a time. I might sound chaotic and incoherent, but I'm just so excited to finally find a video that is not too simple and not too heavy on math notation, yet still makes things clear.

  • @MrFindmethere
    @MrFindmethere 1 year ago

    One of the best videos, covering the whole scenario with an example, not just part of the process.

  • @wagsman9999
    @wagsman9999 1 year ago +1

    This is one of the clearest explanations of back propagation I've come across. Thanks!

  • @MultiNeurons
    @MultiNeurons 1 year ago

    Finally, a very well done job on back propagation.

  • @luisreynoso1734
    @luisreynoso1734 4 months ago

    This is the very best video on explaining Back Propagation! It is very clear and well-designed for anyone needing to learn more about AI. I look forward to seeing other videos from Bevan.

  • @user-vc6uk1eu8l
    @user-vc6uk1eu8l 1 year ago +1

    I am completely amazed!! The clarity of explanation is at the highest level possible! Thank you very much, Sir, for this video and for all the efforts you put to make it so clear! Such a great talent to explain complex ideas in a clear and concise manner is indeed very rarely seen!

  • @sgrimm7346
    @sgrimm7346 11 months ago +2

    Just subbed to your channel because of the extremely clear explanations. Backprop has always been a sticking point for me, mostly due to the fact that no one else is willing to get down from their 'math jargon' throne and actually explain the variables and functions in human language. You, Sir, are a gem and deserve all the kudos you can get. Years ago I wrote a couple of ANN programs with BP, but I didn't understand it; I just wrote the calculations. Now I can't wait to try it out again with a new understanding of the subject. Thank you again.

  • @Abdul-Mutalib
    @Abdul-Mutalib 1 year ago +1

    _What great teaching! The way you have explained all the nuts and bolts, it's just amazing._

  • @asaad3138
    @asaad3138 8 months ago

    By far this is the best explanation. Clear, precise, detailed instructions. Very good, and thank you so much 🙏

  • @karthikeyanak9460
    @karthikeyanak9460 2 years ago

    This is the best explanation of back propagation I ever came across.

  • @faisalrahman3608
    @faisalrahman3608 2 years ago +10

    You have got the skill to explain ML to even an 8-year-old.

  • @wanna_die_with_me
    @wanna_die_with_me 4 months ago

    THE BEST VIDEO FOR UNDERSTANDING BACK PROPAGATION!!!! Thank you sir

  • @magnus.t02
    @magnus.t02 1 year ago

    Best explanation of gradient descent available. Thank you!

  • @kumardeepankar
    @kumardeepankar 10 months ago

    The best explanation on back propagation. Many thanks!

  • @nawaldaoudi2625
    @nawaldaoudi2625 10 months ago

    The best video I've seen so far. Such a clear and concise explanation. I finally understood, in a smooth way, the different concepts related to back and forward propagation. I'm grateful.

  • @sisumon91
    @sisumon91 6 months ago

    Best video I have found for BP! Thanks for all your efforts.

  • @Pouncewound
    @Pouncewound 1 year ago +2

    Amazing video. I was really confused by other videos, but yours really explained every bit of it simply. Thanks!

  • @techgamer4291
    @techgamer4291 5 months ago

    Thank you so much, Sir.
    Best explanation I have seen on this platform.

  • @DanielRamBeats
    @DanielRamBeats 11 months ago

    I had to pause the video at the point you mentioned the chain rule and go back to learn calculus. I took Krista King's calc class on Udemy, and I am finally back to understand these concepts! One month later :)

  • @robertpollock8617
    @robertpollock8617 9 months ago

    EXCELLENT! JOB WELL DONE! I wish you would make a video on batch gradient descent.

  • @rajFPV
    @rajFPV 1 year ago +1

    Just Beautiful !
    Your math notation combined with your skill of teaching just made it so simple! Forever indebted. Thank you so much

    • @sksayid6406
      @sksayid6406 8 months ago

      Please teach us more. It was a great explanation, and you made this so easy for us. Thanks a lot.

  • @PLAYWW
    @PLAYWW 8 months ago

    You are the one YouTuber I have come across who can explain all the specific calculation processes clearly and patiently. I appreciate you creating this video. It helps a lot. I wonder if you could make a video about collaborative filtering?

  • @Be1pheg0r_
    @Be1pheg0r_ 10 months ago +1

    Amazing video. Well explained and easy to understand. By far the best thing I have found so far that explains everything with good, supportive visuals.

  • @depressivepumpkin7312
    @depressivepumpkin7312 8 months ago

    Man, this is at least the 15th video I've watched on the topic, on top of several books covering back propagation, and this is the best one. All the previous ones skip a lot of the explanation, focusing on how important and crucial backpropagation is and what it allows you to do, instead of giving a step-by-step overview. This video contains zero BS and only clear explanations. Thank you.

  • @karthikrajeshwaran1997
    @karthikrajeshwaran1997 6 months ago

    Thanks so much for the clarity. Helps tremendously! Love this.

  • @isurusubasinghe2038
    @isurusubasinghe2038 2 years ago

    The best and simplest explanation ever

  • @DanielRamBeats
    @DanielRamBeats 1 year ago

    You are an amazing teacher. Thank you for taking the time to create and share your knowledge with us. I am grateful.

  • @minerodo
    @minerodo 10 months ago

    I really appreciate this video!! Believe me, I have been looking in books and in other videos, but this is the only one that tells the entire story in a very clear way (besides the StatQuest channel)! Thanks a lot!! God bless you!

  • @farnooshteymourzadeh8874
    @farnooshteymourzadeh8874 1 year ago

    Well clarifying, not too long, not too short, just enough! Really, thanks!

  • @ivannsoita4137
    @ivannsoita4137 29 days ago

    Your calculations (values) for W4 are incorrect, so the updated value of W4 is incorrect. The correct updated value should be approximately 37.16. Your value is 49.5.
    Apart from that, thanks so much for your explanations. First time in years that I have fully understood the step-by-step logic of this operation. Much appreciated.

  • @linolium3109
    @linolium3109 1 year ago

    That's such a good video. I had this in a lecture and didn't understand anything. This was really a relief for me. Thank you for that!

  • @markuscwatson
    @markuscwatson 9 months ago

    Great presentation. Good job 👍

  • @user-ts5vd9fp1g
    @user-ts5vd9fp1g 5 months ago

    This channel is so underrated...

  • @gowthamreddy2236
    @gowthamreddy2236 2 years ago

    My gosh... The clarity is amazing... Thanks Bevan

  • @shobhabhatt3602
    @shobhabhatt3602 2 years ago

    Thanks a lot for such a video. Simplest, easy, and thorough explanation for both beginners as well as advanced learners.

  • @rahuldevgun8703
    @rahuldevgun8703 3 months ago

    The best I have seen to date... superb.

  • @Samurai-in5nr
    @Samurai-in5nr 2 years ago +1

    The best and simplest explanation ever. Thanks man :)

  • @zarmeza1
    @zarmeza1 10 months ago

    This is the best explanation I found, thanks a lot.

  • @adiai6083
    @adiai6083 2 years ago

    The simplest and clearest explanation ever.

  • @Terra2206
    @Terra2206 1 year ago

    I was reading a book about neural networks, and in one of the last steps I had a big question about how a number was calculated, so I got frustrated. The book had an error, and I could find it thanks to this video. Thanks a lot, very good explanation.

  • @LakshmiDevilifentravels
    @LakshmiDevilifentravels 7 months ago

    Thank you so much. It was so simple.

  • @karthikrajeshwaran1997
    @karthikrajeshwaran1997 6 months ago

    Just outstanding. Rewatched it and it made everything so clear!

  • @sma92878
    @sma92878 5 months ago

    This is amazing, so clear and easy to understand!

  • @danjohn-dj3tr
    @danjohn-dj3tr 11 months ago

    Awesome, clearly explained in a simple way 👍

  • @samuelwondemu6972
    @samuelwondemu6972 1 year ago

    Best way of teaching. Thanks a lot.

  • @VladimirDjokic
    @VladimirDjokic 10 months ago

    Excellent explanation. Thank you ❤

  • @kennethcarvalho3684
    @kennethcarvalho3684 7 months ago

    Finally I understood something on this topic.

  • @mychangeforchange7946
    @mychangeforchange7946 1 year ago

    The best explanation ever

  • @abdelmfougouonnjupoun4614
    @abdelmfougouonnjupoun4614 1 year ago

    Thank you so much for such an amazing explanation, the best I have ever seen.

  • @lawrence8597
    @lawrence8597 2 years ago +1

    Thanks very much, God bless you.

  • @mlealevangelista
    @mlealevangelista 2 years ago

    Amazing. I finally learned it. Thank you so much.

  • @KolomeetsAV
    @KolomeetsAV 1 year ago

    Thanks a lot! Really helpful!

  • @jesuseliasurieles8053
    @jesuseliasurieles8053 11 months ago

    Sir, awesome job explaining this amazing topic.

  • @debjeetbanerjee871
    @debjeetbanerjee871 2 years ago +1

    Best explanation on the internet... Could you please make videos on the different activation functions that you mentioned (tanh and ReLU)? It would be really nice of you!

  • @levieuxperesiscolagazelle2684
    @levieuxperesiscolagazelle2684 1 year ago

    God bless you, thank you, you saved my life!!!!!

  • @alinajokeb6930
    @alinajokeb6930 2 years ago

    Every learning channel should follow you. Your teaching method is amazing, sir; also, the last questions were actually on my mind, but you cleared them up. Thanks a lot 🌼

  • @dallochituyi6577
    @dallochituyi6577 2 years ago

    Absolutely enjoyed your explanation. Good job sir.

  • @user-xg1cj7wh1m
    @user-xg1cj7wh1m 9 months ago

    Mister, you have saved my life lol, thank you!!!

  • @abdulhadi8594
    @abdulhadi8594 1 year ago

    Excellent, Sir.

  • @PrashantThakre
    @PrashantThakre 2 years ago +1

    You are great... thanks for such amazing videos.

  • @Zinab8850
    @Zinab8850 2 years ago

    Your explanation is fantastic!! Thanks

  • @destinyobamwonyi8865
    @destinyobamwonyi8865 20 days ago

    Thanks a lot.

  • @yazou4564
    @yazou4564 10 months ago

    well done!

  • @ywbc1217
    @ywbc1217 11 months ago

    YOU ARE REALLY THE BEST ONE 🤗

  • @tkopec125
    @tkopec125 1 year ago

    Finally! Got It! :) Thank You Sir very much

  • @1622roma
    @1622roma 1 year ago

    Best of the best. Thank you 🙏

  • @MrSt4gg
    @MrSt4gg 1 year ago

    Thank you very much for the video!

  • @Amit-mq4ne
    @Amit-mq4ne 1 year ago

    great!! thank you

  • @DanielRamBeats
    @DanielRamBeats 11 months ago

    I had gotten confused by the notion of "with respect to x", which, after some study, means that when you take the derivative of a multi-variable function, you only differentiate the x and keep y exactly the same. God, this learning is taking forever! :/
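
For anyone else pausing at the same phrase, a small generic worked example (not taken from the video) of what "with respect to x" means for a two-variable function:

```latex
% f(x, y) = x^2 y.  Differentiating "with respect to x" treats y as a constant:
\[
\frac{\partial f}{\partial x} = 2xy, \qquad
\frac{\partial f}{\partial y} = x^2
\]
```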

  • @AlperenK.
    @AlperenK. 11 months ago

    Awesome

  • @alonmalka8008
    @alonmalka8008 8 months ago

    illegally underrated

  • @Richard-bt6uk
    @Richard-bt6uk 5 months ago

    Hello Bevan
    Thank you for your excellent videos on neural networks.
    I have a question pertaining to this video covering Back Propagation. At about 14:30 you present the equation for determining the updated weight, W7. You are subtracting the product of η and the partial derivative of the Cost (Error) Function with respect to W7. However, this product does not yield a delta W7, i.e., a change in W7. It would seem that the result of this product is more like a delta of the Cost Function, not of W7, and it is not mathematically consistent to adjust W7 by a change in the Cost Function. Rather, we should adjust W7 by a small change in W7. Put another way, if these quantities had physical units, the equation would not be consistent in units. From this perspective, it would be more consistent to use the reciprocal of the partial derivative shown. I'm unsure if this would yield the same results. Can you explain how using the derivative as shown to get the change in W7 (or indeed in any of the weights) is mathematically consistent?
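
For reference while reading the question above, the textbook form of the gradient-descent update (written generically here, not asserting the video's exact notation) is:

```latex
\[
w_7 \;\leftarrow\; w_7 \;-\; \eta \,\frac{\partial C}{\partial w_7}
\]
```

In that standard formulation the learning rate η is simply a small positive scalar chosen empirically, and the product η · ∂C/∂w7 is taken directly as the step applied to w7; the derivative (rather than its reciprocal) is used because it measures how sensitively the cost responds to w7, which is exactly the direction information gradient descent needs.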

  • @Koyaanisqatsi2000
    @Koyaanisqatsi2000 1 year ago

    Great content! Thank you!

  • @giorgosmaragkopoulos9110
    @giorgosmaragkopoulos9110 5 months ago +1

    So what is the clever part of backprop? Why does it have a special name, and why isn't it just called "gradient estimation"? How does it save time? It looks like it just calculates all the derivatives one by one.

    • @bevansmithdatascience9580
      @bevansmithdatascience9580 5 months ago

      It is the main reason why we can train neural nets. The idea in training neural nets is to obtain the weights and biases throughout the network that will give us good predictions. The gradients you speak of get propagated back through the network in order to update the weights to be more accurate each time we add in more training data.
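
To put a number on the reply above: the time saving comes from the chain rule letting each layer reuse the gradient already computed for the layer after it, so no derivative is ever recomputed from scratch. A minimal sketch with made-up values (a generic one-neuron-per-layer toy network, not the network in the video):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy network: x -> z1 = w1*x + b1 -> g1 = sigmoid(z1) -> ypred = w7*g1 + b2
x, yact = 2.0, 1.0                      # one training example (made up)
w1, b1, w7, b2 = 0.5, 0.1, -0.3, 0.2    # made-up starting weights/biases
eta = 0.01                              # learning rate

# Forward pass: cache the intermediate values, they are reused going backwards
z1 = w1 * x + b1
g1 = sigmoid(z1)
ypred = w7 * g1 + b2
cost = (ypred - yact) ** 2

# Backward pass: each line reuses a gradient computed on a line above it
dC_dypred = 2 * (ypred - yact)          # output layer first
dC_dw7    = dC_dypred * g1              # reuses dC_dypred
dC_dg1    = dC_dypred * w7              # reuses dC_dypred
dC_dz1    = dC_dg1 * g1 * (1 - g1)      # reuses dC_dg1 (sigmoid derivative)
dC_dw1    = dC_dz1 * x                  # reuses dC_dz1
dC_db1    = dC_dz1                      # reuses dC_dz1

# One gradient-descent step on every parameter
w7 -= eta * dC_dw7
w1 -= eta * dC_dw1
b1 -= eta * dC_db1
```

Without that reuse, the gradient for every weight in an earlier layer would need its whole chain of derivatives recomputed, which is exactly the repeated work that gives back propagation its own name.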

  • @user-yh3lm7nw1f
    @user-yh3lm7nw1f 1 month ago

    Best

  • @surojitkarmakar3452
    @surojitkarmakar3452 2 years ago

    At last I understood 😌

  • @chillax1629
    @chillax1629 1 year ago +3

    I believe you used a learning rate of 0.001, not 0.01. Or you did use a learning rate of 0.01 and have an error in the updated w7, as this should be 12.42 and not 12.04.
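
For anyone checking this with a calculator: using only the numbers quoted in these comments (w7 ≈ 12 and a gradient ∂C/∂w7 ≈ −42.2, i.e. a negative value), the two candidate learning rates give

```latex
\[
12 - 0.01  \times (-42.2) = 12.422, \qquad
12 - 0.001 \times (-42.2) = 12.0422 \approx 12.04
\]
```

which is consistent with the commenter's point that the 12.04 shown corresponds to η = 0.001 rather than 0.01 (assuming those quoted values are right; the video's own figures have not been re-checked here).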

  • @StudioFilmoweAlpha
    @StudioFilmoweAlpha 7 months ago +1

    22:53 Why is Z1 equal to -0.5?

  • @dandan1364
    @dandan1364 1 month ago

    Did I miss the updating of the biases? I think you only showed the partial derivative chain for the weights, not the biases.

  • @neinehmnein5701
    @neinehmnein5701 2 years ago +2

    Hi, thanks for the video! This question might be a little stupid, but aren't the weight updates at 24:27 wrongly computed? Shouldn't the first value be 0,29303 rather than 193?
    Thanks for an answer!

    • @bevansmithdatascience9580
      @bevansmithdatascience9580 2 years ago

      Thanks for the message. It could be that I have made a mistake. But I hope that you can understand the principle of how to do it. Thanks

    • @waleedbarazanchi4503
      @waleedbarazanchi4503 2 years ago

      It is correctly stated. You may have confused 19,303 with 19.303. Best

  • @DanielRamBeats
    @DanielRamBeats 11 months ago

    Ugh, I am slowly getting it. First you take the derivative of the cost function with respect to Ypred, which is 2 times the difference between Ypred and Yact, and then you take the partial derivative of Ypred with respect to w7, which is just g1. Then you multiply them together to obtain the partial derivative of the cost function with respect to w7. Then you take that partial derivative, multiply it by a learning rate, and subtract that from the original value of w7; this then becomes the new value of w7.
    I am still confused about the next part, calculating the partial derivatives of the hidden layers, but hey, that is some progress so far, right? :/

    • @Luca_040
      @Luca_040 8 months ago

      And how to calculate the first partial derivative?
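
The "first partial derivative" asked about above comes straight from the squared-error cost used in this kind of walkthrough: differentiating the cost with respect to the prediction (treating Yact as a constant) gives the 2 × (Ypred − Yact) factor the parent comment starts from.

```latex
\[
C = (y_{pred} - y_{act})^2
\quad\Longrightarrow\quad
\frac{\partial C}{\partial y_{pred}} = 2\,(y_{pred} - y_{act})
\]
```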

  • @apppurchaser2268
    @apppurchaser2268 1 year ago

    Great explanation, thanks. But I think 12 - 0.01 * 42.2 is 11.578 and not 12.04 @15:37. By the way, amazing job, well-explained concepts 🙏

    • @Kevoshea
      @Kevoshea 1 year ago +1

      Watch out when multiplying two minus numbers.

  • @effortlessjapanese123
    @effortlessjapanese123 5 months ago

    Haha, South African accent. Baie dankie ("thank you very much" in Afrikaans), Bevan!

  • @ammarjagadhita3189
    @ammarjagadhita3189 5 months ago

    I was just wondering: in the last part, when I try to calculate the partial derivative of w4, the result I get is -3711, but in the video it is -4947. Then, to make sure, I changed the last part of the equation to x1 (60), and it gives me the same result as in the video, which is -2783. So I'm not sure if I missed something, since he didn't write out the calculation for w4.

    • @ivannsoita4137
      @ivannsoita4137 29 days ago

      You are right. He has an error somewhere in his calculations.

  • @FPChris
    @FPChris 2 years ago

    So you do a new forward pass after each sample's back propagation? X1 forward, back... X2 forward, back... X3 forward, back.

    • @bevansmithdatascience9580
      @bevansmithdatascience9580 2 years ago

      Please check out my video on What is an epoch? That should clear things up. If not, ask again.
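
For readers with the same question: one common scheme (per-sample stochastic gradient descent) does exactly that: forward pass, backward pass and weight update for every training example in turn, with the whole sweep repeated for several epochs. The sketch below uses a deliberately tiny stand-in model (a single linear unit, nothing like the video's network) purely to show the loop structure; the author's epoch video covers which variant the video itself uses.

```python
def forward(x, w, b):
    return w * x + b                     # tiny one-parameter model, just to show the loop

def gradients(x, y_act, w, b):
    y_pred = forward(x, w, b)
    dC_dy = 2 * (y_pred - y_act)         # squared-error cost C = (y_pred - y_act)^2
    return dC_dy * x, dC_dy              # dC/dw and dC/db via the chain rule

samples = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # made-up (x, y_act) pairs
w, b, eta = 0.0, 0.0, 0.01

for epoch in range(100):                 # one epoch = one full sweep over the data
    for x, y_act in samples:             # X1 forward+back+update, X2 ..., X3 ...
        dC_dw, dC_db = gradients(x, y_act, w, b)
        w -= eta * dC_dw                 # update immediately after each sample
        b -= eta * dC_db
```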

  • @wis-labbahasainggris8956
    @wis-labbahasainggris8956 3 months ago

    Why does weight updating use a minus sign, instead of a plus sign? 24:34

    • @bevansmithdatascience9580
      @bevansmithdatascience9580 3 months ago

      In gradient descent we want to tweak the weights/biases until we obtain a minimum error in our cost function. So for that we need to compute the negative of the gradient of the cost function, multiply it by a learning rate and add it to the previous value. This negative means we are moving downhill in the cost function (so to speak)
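
A one-dimensional sanity check of the reply above, using a toy cost (not the video's): with C(w) = (w - 3)^2 the minimum sits at w = 3. Starting from w = 5, the slope is positive, and subtracting the learning rate times that slope moves w toward the minimum:

```latex
\[
\left.\frac{dC}{dw}\right|_{w=5} = 2\,(5-3) = 4 > 0,
\qquad
w_{new} = 5 - 0.1 \times 4 = 4.6
\]
```

Adding instead of subtracting would give 5.4 and push the cost up, which is why the update rule carries a minus sign.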

  • @Princess-wq7wk
    @Princess-wq7wk 1 year ago

    How do I update b1? I don't know how to update it.
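
In case it helps others asking the same thing: a bias is updated exactly like a weight; the only difference is that the last factor in its chain is 1, because z = (weights × inputs) + b. Using the same naming as in these comments (z1 feeding the sigmoid to give g1, output ypred, learning rate η; the exact indices may differ from the video's diagram):

```latex
\[
\frac{\partial C}{\partial b_1}
= \frac{\partial C}{\partial y_{pred}}
  \cdot \frac{\partial y_{pred}}{\partial g_1}
  \cdot \frac{\partial g_1}{\partial z_1}
  \cdot \underbrace{\frac{\partial z_1}{\partial b_1}}_{=\,1},
\qquad
b_1 \;\leftarrow\; b_1 - \eta\,\frac{\partial C}{\partial b_1}
\]
```

So b1 reuses everything already computed for the weights feeding z1; only the final factor changes (it is 1 instead of the input value).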

  • @codematrix
    @codematrix 1 year ago

    Hi Bevan, I plugged in your original weights and biases and got 24.95, which is correct, using inputs 60, 80, 5. I then entered all your modified values at frame 24:39 and got 42.19. I was hoping to get very close to ~82. Are you sure that you applied the local minimum value during gradient descent?

    • @bevansmithdatascience9580
      @bevansmithdatascience9580 1 year ago +1

      It could be a mistake on my part. I unfortunately don't have time now to go back and check. However, the important thing is that you understand how it all works. Cheers

    • @codematrix
      @codematrix 1 year ago

      @@bevansmithdatascience9580 I think I need to re-forward pass the features back into the NN with the adjusted values and recalculate the cost function until it reaches an acceptable local minimum. I’ll give that a try.

  • @YAPJOHNSTON
    @YAPJOHNSTON 1 year ago

    How are b1 and b2 calculated?

  • @orgdejavu8247
    @orgdejavu8247 2 years ago

    g1/z1, okay: dz1 is the derivative of the sigmoid function, but what happens with the dg1 in the numerator?

    • @bevansmithdatascience9580
      @bevansmithdatascience9580 2 years ago

      Please check 20:50. Let me know if that helps?

    • @orgdejavu8247
      @orgdejavu8247 2 years ago

      @@bevansmithdatascience9580 I asked a bad question. What I meant is: what happened to dellZ1? Why does it disappear? Why does dellg1/dellz1 equal only the derivative of the sigmoid function?
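
For anyone else stuck on the same step: g1 is defined as the sigmoid of z1, so dellg1/dellz1 is, by definition, just the sigmoid's derivative evaluated at z1; nothing disappears, the fraction is only Leibniz notation for that derivative.

```latex
\[
g_1 = \sigma(z_1) = \frac{1}{1 + e^{-z_1}}
\quad\Longrightarrow\quad
\frac{\partial g_1}{\partial z_1} = \sigma(z_1)\bigl(1 - \sigma(z_1)\bigr) = g_1\,(1 - g_1)
\]
```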

  • @batuhantongarlak3490
    @batuhantongarlak3490 2 years ago

    besttt

  • @joyajoya4674
    @joyajoya4674 1 year ago

    😍😍