Artificial Neural Networks (Part 3) - Backpropagation

  • Published: 1 Oct 2024

Comments • 73

  • @meshackamimo1945
    @meshackamimo1945 10 years ago +8

    No human alive has ever tutored this abstract topic as well, or simplified it as much, as you. Kudos! Were it not for you, I would have long given up on this topic. I keep watching the video over and over, again and again, and I don't get bored. It inspires me to believe that maths and science are deemed hard due to poorly written textbooks for beginners. Wish I could be taught by you for just a semester! I envy your students. You're just too good, marvellous!
    Keep it up!

  • @liangfage4247
    @liangfage4247 8 years ago +14

    Thank you very much for the tutorials. Most tutorials on this topic try to bury me with formulas and poorly written math without examples. They succeeded, until I finished these three videos.

  • @bestalouch
    @bestalouch 9 years ago +8

    Man, you have really made the best tutorial video on the whole of RUclips.
    You have really saved me :)
    Thanks for your efforts

    • @homevideotutor
      @homevideotutor  9 years ago +1

      Thanks Ali Younes. It is my pleasure. Currently putting up another site with simple images instead of videos: scholastic-images.webs.com

    • @meriemallab8511
      @meriemallab8511 8 years ago

      +homevideotutor Thank you very much, sir. The best explanation ever (y)

  • @BerkayCelik
    @BerkayCelik 10 years ago +4

    Thanks. I have a question: you calculated for a particular input. What if we have more inputs; how does the algorithm work then? For each input, do we update the weights only once, or keep going until the error falls below an acceptable threshold and then stop? Then, do we feed the next input into the network? Another point is multiple outputs: it may be a multiclass problem, so how do we solve that?

    • @jimmorrison6613
      @jimmorrison6613 10 years ago

      I have the same question: how does it work for multiple inputs and outputs?

  • @chinthakaudugama2792
    @chinthakaudugama2792 8 years ago +1

    Thank you, it helps a lot. Can you add another lesson about the Matlab nntool?

  • @wesleyshann6524
    @wesleyshann6524 8 years ago +2

    I have been looking for tutorials on the internet for a while, but almost everything I looked into was confusing and hard to understand. Then I found your tutorial (this video and also video 2; I haven't seen video 1 yet) and everything turned out to be much simpler (a lot of calculation and iterations, but very easy to understand the way you explained it). Well, thank you very much for providing this learning material, and have my like ^^

    • @homevideotutor
      @homevideotutor  8 years ago

      Dear Shann, Many thanks for the great positive feedback!!! ruclips.net/video/h2w8LueoQi8/видео.html

  • @devarshambani1969
    @devarshambani1969 8 years ago +1

    Thank you for making this concept clear and understandable; the real examples help a lot, man!!

  • @STREETBOYXY
    @STREETBOYXY 7 years ago +1

    I finally understood how to calculate my hidden layers. Thank you very much for the inspiration :)

  • @sirhuang9360
    @sirhuang9360 5 years ago +1

    The best teacher; it is very clear! Thanks so much.

  • @BajoMundoUnderground
    @BajoMundoUnderground 8 years ago +1

    Finally a person who can explain this topic... thank you, really nice and clear. I think the best part is when you actually do the feedforward and the backprop example. Thanks again!

  • @dougylee5299
    @dougylee5299 9 years ago +3

    This is the clearest video I've seen!
    No one else does a worked example! Thank you so much! It's helped me heaps!

  • @IsaacCallison
    @IsaacCallison 5 years ago

    I guess this makes sense if you already know why you are using back-prop.

  • @deftonesazazel
    @deftonesazazel 5 years ago

    There seems to be a mistake when you substitute the values for delta_o1. How do you get this value?

  • @mauricioribeiro3547
    @mauricioribeiro3547 8 years ago

    github.com/mauricioribeiro/pyNeural/tree/master/3.4 uses this video as an example

  • @venkat157reddy
    @venkat157reddy A year ago

    Super explanation. Thank you so much!

  • @sebastianpoliak
    @sebastianpoliak 8 years ago +1

    Finally a clear and straightforward explanation, thank you! :)

  • @praveshkumarsingh6132
    @praveshkumarsingh6132 7 years ago

    Sir, what is the mobility factor?

  • @BiranchiNarayanNayak
    @BiranchiNarayanNayak 7 years ago +1

    The best video tutorial on Neural Networks.

  • @AshrafGardizy
    @AshrafGardizy 8 years ago

    Thank you, sir, for the very clear explanation in this tutorial. I am waiting for more videos about other Artificial Neural Network algorithms; please continue the tutorial, and if you can make some video tutorials about Deep Learning it would be very kind of you.

  •  5 years ago

    Thanks man

  • @naimali6385
    @naimali6385 9 years ago +1

    Thank you so much... Can you give me the slides of the lectures, for learning purposes? Thank you.

    • @homevideotutor
      @homevideotutor  9 years ago +2

      Naim Ali Slides can be downloaded from scholastic-videos.com/ in PDF format.

  • @someetsingh2224
    @someetsingh2224 7 years ago

    Dear Sir,
    Wonderful video! But what are n and alpha (the training rate and mobility factor)? Who decides them, and what do they signify? If we don't know these values, what default values can we use?
    Hope you will reply to this query.

  • @mehmetakcay9659
    @mehmetakcay9659 8 years ago

    Thank you for the tutorials. I want to ask a question. I am working on an Artificial Neural Network (ANN) model in Matlab. I have experimental data and want to use an ANN model to model it. I use the back-propagation learning algorithm in a feed-forward single-hidden-layer neural network, with the logistic sigmoid (logsig) transfer function for both the hidden layer and the output layer. I completed my ANN model and obtained the weights. Now I want to compute the results manually using the weights, so I can create a formula. I tried but could not do it. Can you help me with this subject, and do you have any document, PDF, or video about it? Thanks for your interest.

  • @WahranRai
    @WahranRai 6 years ago

    Why not explain backpropagation with gradient descent without momentum? The update rule with momentum is complicated and not easy to understand.

    • @homevideotutor
      @homevideotutor  6 years ago

      Thank you for the great comment.

    • @WahranRai
      @WahranRai 6 years ago

      In any case, a very good video!!!
      I am waiting for the matrix form of your example; it will be useful to take advantage of matrix computation (in the case of many layers and neurons).
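
Picking up the request above for a matrix form: here is a minimal sketch of one forward and one backward pass written with matrices, assuming a hypothetical 2-2-1 network with logistic sigmoid activations and plain gradient descent without momentum. The input, target, learning rate, and initial weights are made-up values, not the ones from the video.

```python
import numpy as np

def sigmoid(v):
    """Logistic activation phi(v) = 1 / (1 + e^(-v)), applied elementwise."""
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 2))    # hidden-layer weights (2 hidden neurons, 2 inputs)
W2 = rng.normal(size=(1, 2))    # output-layer weights (1 output, 2 hidden)
eta = 0.25                      # learning rate

x = np.array([[0.1], [0.9]])    # one input vector (column)
d = np.array([[0.9]])           # desired output

# Forward pass: one matrix product per layer.
y1 = sigmoid(W1 @ x)            # hidden activations
y2 = sigmoid(W2 @ y1)           # network output

# Backward pass: local gradients (deltas), output layer first.
delta2 = (d - y2) * y2 * (1 - y2)          # output-layer delta
delta1 = (W2.T @ delta2) * y1 * (1 - y1)   # hidden-layer deltas

# Weight update, w(n+1) = w(n) + eta * delta * input, as outer products.
W2 = W2 + eta * delta2 @ y1.T
W1 = W1 + eta * delta1 @ x.T
```

With many layers this generalizes to one matrix product per layer in each direction, which is why the matrix form pays off.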

  • @aliveli8007
    @aliveli8007 7 years ago

    A very clear and basic example of backpropagation. Thanks for this tutorial.

  • @jesusjimenez6401
    @jesusjimenez6401 10 years ago +1

    Very useful!!!!

  • @dadairheren2169
    @dadairheren2169 8 years ago

    Thank you for this incredible tutorial video. You have just saved me from big trouble. The explanations were quite understandable, bravo.

  • @kumarprasant7
    @kumarprasant7 7 years ago

    Very nice explanation.

  • @adityarawat5063
    @adityarawat5063 9 years ago +1

    Thanks, man, for saving my ass... really good explanation...

    • @homevideotutor
      @homevideotutor  9 years ago +1

      Aditya Rawat Thank you for the nice comment

  • @dr.md.atiqurrahman2748
    @dr.md.atiqurrahman2748 3 years ago

    No comments. Just Wow.

  • @ciddim
    @ciddim 7 years ago +1

    What does the n = 0.25 (eta?) stand for next to the learning rate (alpha) of 0.0001? I don't seem to get it at all. I know this is an old video and I don't expect an answer, but it's worth a shot.

    • @homevideotutor
      @homevideotutor  7 years ago

      Dominic Leclerc It is the learning rate of the ANN. neuralnetworksanddeeplearning.com/chap3.html

    • @ciddim
      @ciddim 7 years ago

      homevideotutor Then what is alpha!?

    • @homevideotutor
      @homevideotutor  7 years ago

      Please refer to FAQ section of scholastic.teachable.com/p/pattern-classification for more information on this. Many thanks for the interest.
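
To spell out the distinction this thread is circling: in common textbook formulations of backpropagation with momentum there are two constants, a learning rate and a momentum factor, and the update rule combines the current gradient with the previous weight change. A hypothetical single-weight sketch follows; only eta = 0.25 and alpha = 0.0001 are taken from the discussion above, and the weight and gradient values are made-up placeholders.

```python
eta = 0.25        # learning rate (the n, or eta, discussed above)
alpha = 0.0001    # momentum factor (the alpha discussed above)

w = 0.5           # current weight, w(n)          (made-up value)
prev_dw = 0.02    # previous change, delta_w(n-1) (made-up value)
grad = -0.1       # hypothetical error gradient dE/dw at this step

# Momentum update rule: delta_w(n) = alpha * delta_w(n-1) - eta * dE/dw
dw = alpha * prev_dw - eta * grad
w_next = w + dw   # w(n+1) = w(n) + delta_w(n)
```

The alpha term only nudges the step in the direction of the previous change; with alpha = 0 this reduces to plain gradient descent.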

  • @shadeelhadik
    @shadeelhadik 7 years ago

    thank you so much

  • @ANASKHAN786
    @ANASKHAN786 7 years ago

    Amazing tutorial, sir. Thank you very much. Keep making more!

  • @akhileshjoshi8484
    @akhileshjoshi8484 8 years ago +1

    Very simply explained.. thanks a ton :)

    • @homevideotutor
      @homevideotutor  8 years ago

      Thank you. Please refer to our new Web site, scholastic.teachable.com, for any further additions. Best regards.

  • @mahdishafiei7230
    @mahdishafiei7230 8 years ago

    thanks so much

  • @Felixantony84
    @Felixantony84 6 years ago

    1000 Thanks

  • @seprienna
    @seprienna 9 years ago

    How do I use this example in nntool? What training and learning functions should I use?

  • @rikzman4
    @rikzman4 7 years ago

    Sorry, I'm still learning and I'm really stuck on how to get the exp value. Thanks!

    • @homevideotutor
      @homevideotutor  7 years ago

      Thanks for the interest. Please use a calculator to evaluate e^(-v). If, for example, v = 0.4851, then phi(v) = 1/(1 + e^(-0.4851)) = 0.619. Hope that helps.

    • @rikzman4
      @rikzman4 7 years ago

      Thank you so much... just passed the exam :)
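
The hand calculation in this thread is easy to check with a short script; this sketch just evaluates the logistic function for the value v = 0.4851 used in the reply above.

```python
import math

def phi(v):
    """Logistic sigmoid: phi(v) = 1 / (1 + e^(-v))."""
    return 1.0 / (1.0 + math.exp(-v))

print(round(phi(0.4851), 3))  # prints 0.619, matching the worked value
```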

  • @hareharan
    @hareharan 6 years ago

    Excellent teaching, sir... thank you so much, sir.

  • @franzdusenduck42
    @franzdusenduck42 5 years ago +2

    Nobody has ever explained this topic better than you, my friend. Huge respect!!

  • @zakariazakivic1430
    @zakariazakivic1430 8 years ago

    Thank you very much; it's the best I have found on YouTube.

  • @homevideotutor
    @homevideotutor  11 years ago

    Pleasure

  • @2427roger
    @2427roger 7 years ago

    Thanks, I finally understood this!

  • @Titu-z7u
    @Titu-z7u 7 years ago

    Thank you so much. You saved us, brother. Finally I got the feeling that I understand it.

  • @artjom84
    @artjom84 9 years ago

    thank you so much !

  • @hesamrahmati498
    @hesamrahmati498 9 years ago

    Thank you so much :)

  • @messiasreinaldo4492
    @messiasreinaldo4492 11 years ago

    Thanks again!

  • @edwinvarghese
    @edwinvarghese 7 years ago

    @homevideotutor Can you explain the difference between the 'n' time step and the suffixes of the inputs? It's confusing.

    • @homevideotutor
      @homevideotutor  7 years ago

      The suffix in x1 means the first feature vector, and x2 means the second feature vector. w11(n) means the value of the weight w11 at time instant n, and w11(n+1) means its value at time instant n+1. Hope that explains it.

    • @edwinvarghese
      @edwinvarghese 7 years ago

      homevideotutor How did you fit 'time' into the context? Like in the literal sense? I am totally new to NNs, sorry.

    • @homevideotutor
      @homevideotutor  7 years ago

      That is the beauty of back propagation. You go one pass forward and then one pass backward. Each backward pass is one time step. Each backward pass changes the weights so that the final error value (desired minus output, or d - y) is reduced in the next forward pass.

    • @edwinvarghese
      @edwinvarghese 7 years ago

      homevideotutor Cool, I think I got it. Thank you very much for replying; really appreciate it. Also, do you know any good (good and basic, like yours) video tutorials/articles on RNNs? If yes, can you give me the link? Thanks in advance.

    • @homevideotutor
      @homevideotutor  7 years ago +1

      I did a search on YouTube. I do not know how good this is, but it looks simple: ruclips.net/video/kMLl-TKaEnc/видео.html If I create one in the future I will let you know. It will be hosted at scholastic.teachable.com
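
To make the time index n from the thread above concrete, here is a toy sketch in which a single weight is updated over three backward passes, so that w[n] plays the role the video assigns to w11(n). The input, target, initial weight, and learning rate are all made-up values.

```python
import math

def phi(v):
    """Logistic sigmoid: phi(v) = 1 / (1 + e^(-v))."""
    return 1.0 / (1.0 + math.exp(-v))

# Toy single-input, single-weight "network" to illustrate the time index n:
# w[n] is the weight before backward pass n, w[n + 1] the weight after it.
x, d = 0.5, 0.9     # input feature and desired output (made-up values)
eta = 0.25          # learning rate
w = [0.3]           # w[0] = w(0), the initial weight

for n in range(3):
    y = phi(w[n] * x)                    # forward pass at time n
    delta = (d - y) * y * (1.0 - y)      # local gradient at the output
    w.append(w[n] + eta * delta * x)     # backward pass: w(n+1) = w(n) + eta*delta*x
```

Each backward pass produces w(n+1) from w(n), and the error at the next forward pass shrinks, which is exactly the behaviour described in the reply.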