#1 Solved Example Back Propagation Algorithm Multi-Layer Perceptron Network by Dr. Mahesh Huddar

  • Published: 29 Sep 2024

Comments • 209

  • @akinyaman
    @akinyaman 1 year ago +22

    man, I looked at many backprop explanations and most of them talk rocket-science stuff; this is the easiest and clearest one ever, thanks for sharing

  • @df_iulia_estera
    @df_iulia_estera 1 year ago

    Awesome explanation! Thanks a lot!

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Thank You
      Do like share and subscribe

  • @alizainsaleem5533
    @alizainsaleem5533 1 month ago

    best explanation

  • @tejasvinnarayan2887
    @tejasvinnarayan2887 1 year ago +15

    Very clear! How about the bias b? What is the formula in case we add a bias?

  • @seabiscuitthechallenger6899
    @seabiscuitthechallenger6899 3 months ago

    👍👍👍👍👍👍👍👍

    • @MaheshHuddar
      @MaheshHuddar  2 months ago +1

      Thank You
      Do like share and subscribe

  • @joeystenbeck6697
    @joeystenbeck6697 2 years ago +44

    This makes the math very clear. I now know the math and have some intuition, so I hope to fully connect the two soon. Thanks for the great video!

    • @MaheshHuddar
      @MaheshHuddar  2 years ago +2

      Thank You
      Do like share and subscribe

    • @shivamanand8019
      @shivamanand8019 1 year ago +2

      It looks like there is an error in the ∆w_ji notation; you followed just the opposite convention.
      @@MaheshHuddar

  • @safayetsuzan1951
    @safayetsuzan1951 2 years ago +45

    Sir, you deserve a big thanks. My teacher gave me an assignment and I was searching on YouTube for 2 days for the weight calculation. But finally your video has done the work. It was really satisfying. Thank you, sir.

    • @MaheshHuddar
      @MaheshHuddar  2 years ago +4

      Welcome
      Do like share and subscribe

  • @jarveyjaguar4395
    @jarveyjaguar4395 1 year ago +8

    I have an exam in 2 days and your videos just saved me from failing this module.
    Thank you so much and much love from 🇩🇿🇩🇿.

  • @khizarkhan2250
    @khizarkhan2250 1 year ago +5

    Guys! The threshold or bias is a tuning parameter; you can select something low like 0.01 or 0.02, or high like 0.2, and check whether the error is getting low or not. I hope this helps.

  • @alperari9496
    @alperari9496 7 months ago +3

    I believe for hidden units, the w_kj in the delta(j) formula should have been w_jk. Namely, the other way around.

    • @alperari9496
      @alperari9496 7 months ago +2

      And delta(w_ji) should have been delta(w_ij), again the other way around.

    • @steveg906
      @steveg906 5 months ago +2

      yeah, I was about to comment the same thing

  • @ushanandhini1942
    @ushanandhini1942 2 years ago +11

    Easy to understand. Please make the next one on CNN.

  • @shubhampamecha9650
    @shubhampamecha9650 2 years ago +2

    And where is the bias b?
    There should be some constant term too, right?

  • @BloodX03
    @BloodX03 1 year ago +5

    Damn sir, you are the best YouTube teacher in AI.. Love you, sir

  • @femiwilliam1830
    @femiwilliam1830 1 year ago +2

    Good job, but why is the bias term not accounted for before applying the sigmoid function?

  • @R41Ryan
    @R41Ryan 1 year ago +5

    Thank you. I've been trying to implement a reinforcement algorithm from scratch. I understood everything except backpropagation, and every video on it that I've watched has always been vague until I saw this video. Good stuff!

  • @sajalali8155
    @sajalali8155 1 year ago +1

    Please use the Urdu language

  • @horiashahzadi302
    @horiashahzadi302 1 month ago +1

    A very impressive lecture... thank you very much... keep it up

  • @ram-pc4wk
    @ram-pc4wk 10 months ago +1

    How are you deriving the delta_j formula? You could include the derivation of the sigmoid function.
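The delta_j formula asked about above comes from the derivative of the sigmoid, which can be written in terms of the sigmoid's own output: σ'(x) = σ(x)(1 − σ(x)). A minimal numeric sketch (my own illustration, not taken from the video):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The sigmoid's derivative expressed through its own output:
# sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x))
def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)

# That o*(1-o) factor is what appears in the delta formulas:
# for an output unit, delta = (target - o) * o * (1 - o).

# Check numerically that o*(1-o) really is the slope of the sigmoid:
x = 0.377
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
print(numeric, sigmoid_derivative(x))
```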

  • @dydufjfbfnddhbxnddj83
    @dydufjfbfnddhbxnddj83 1 year ago +2

    how did you update the weights of the connections between the input layer and the hidden layer?

  • @shreyaankrao968
    @shreyaankrao968 1 year ago +3

    I got a crystal-clear understanding of this concept only because of you, sir. The flow of the video is excellent; I appreciate your efforts!! Thank you and keep up the good work !!

  • @oposicionine4074
    @oposicionine4074 1 year ago +1

    How do you update the weights if you have more input data? In this case he only has 1 input; how do you do it with 2 inputs? Do you do the same twice?

    • @animeclub8475
      @animeclub8475 8 months ago +1

      Everyone says "thank you", but only a few understand that this video is useless if there are more neurons in one layer. Those who say "thank you" do not even plan to build a neural network.
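On the multiple-inputs question above: with two inputs the same rule is applied once per weight, using each weight's own input in Δw_i = η·δ·x_i; it is one update over all incoming weights, not the whole procedure done twice. A hedged sketch for a hidden unit (all the numbers are illustrative assumptions, not the video's):

```python
# Hidden-unit weight update with two inputs.
# For a hidden unit h: delta_h = o_h * (1 - o_h) * sum_k(w_hk * delta_k),
# then each incoming weight uses its own input: w_i <- w_i + eta * delta_h * x_i.
eta = 0.9
x = [0.1, 0.7]        # the two inputs feeding the hidden unit
w_in = [0.4, -0.2]    # input-to-hidden weights
o_h = 0.55            # hidden unit's sigmoid output (illustrative)
w_out = 0.3           # hidden-to-output weight
delta_out = -0.03     # delta of the downstream output unit

delta_h = o_h * (1 - o_h) * (w_out * delta_out)

# One pass over the weights; each uses its own input component.
w_in = [wi + eta * delta_h * xi for wi, xi in zip(w_in, x)]
print(delta_h, w_in)
```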

  • @pranavgangapurkar195
    @pranavgangapurkar195 1 year ago +1

    In one epoch, how many times does backpropagation take place?

  • @sowmiya_rocker
    @sowmiya_rocker 1 year ago +1

    Thanks for the video, sir. I have a doubt. How did you update the weights without Gradient Descent (GD) or any other optimization technique, sir? Because I read in blogs that networks don't get trained by backpropagation alone, without GD. In other words, my doubt is: how does the calculation change if we also implemented GD here? I'm a rookie; kindly guide me, sir.

    • @adityachalla7677
      @adityachalla7677 1 year ago

      Gradient descent has been used in this video while updating the weights; the change in weights is done through gradient descent. But here he has not written out the derivative math.

    • @rohanwarghade7111
      @rohanwarghade7111 9 months ago

      How did he get y_target = 0.5? @@adityachalla7677
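To make the gradient-descent reply above concrete: the update Δw = η·δ·x is exactly one gradient-descent step on the squared error of a sigmoid unit. A sketch with illustrative numbers (the learning rate, inputs, weights, and target are my assumptions, not the video's):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One gradient-descent weight update for a single sigmoid output unit.
eta = 0.9          # learning rate
x = [0.35, 0.9]    # inputs to the unit
w = [0.3, 0.5]     # current weights
target = 0.5

net = sum(wi * xi for wi, xi in zip(w, x))
o = sigmoid(net)

# delta = -dE/dnet for E = (1/2)(target - o)^2 with a sigmoid unit
delta = (target - o) * o * (1 - o)

# Gradient-descent step: w_i <- w_i + eta * delta * x_i
w = [wi + eta * delta * xi for wi, xi in zip(w, x)]
print(w)
```

After one step the output moves closer to the target, which is the whole point of following the negative gradient.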

  • @vnbalaji7225
    @vnbalaji7225 2 years ago +4

    A simple, lucid example, well illustrated. Please continue.

  • @timebokka3579
    @timebokka3579 1 year ago

    Can you explain this?
    The input of a neuron is 0.377.
    The output of the same neuron is 0.5932.
    How does this happen?
    How does 2.71^-0.377 give the answer 0.6867?
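On the arithmetic question above: e^(−0.377) ≈ 0.686 is only the intermediate term; the neuron's output is the full sigmoid, 1/(1 + e^(−0.377)) ≈ 0.5932. A quick check:

```python
import math

net = 0.377
e_term = math.exp(-net)         # e^-0.377, approximately 0.686
output = 1.0 / (1.0 + e_term)   # sigmoid(0.377), approximately 0.5932
print(round(e_term, 4), round(output, 4))
```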

  • @pritam-kunduu
    @pritam-kunduu 1 year ago +1

    You taught very well. Today is my exam. Your videos were really helpful. I hope I pass well without getting a backlog in this subject. 👍👍

  • @sirknightartorias68
    @sirknightartorias68 2 months ago

    It's always the Indians 😂❤

  • @madhusaggi
    @madhusaggi 9 months ago

    Can you please do videos on CNN with the mathematical concepts? Your videos are very useful and understandable. Thank you

  • @abu-yousuf
    @abu-yousuf 1 year ago +1

    Great work Dr. Mahesh. Thanks from Pakistan.

  • @ToriliaShine
    @ToriliaShine 6 months ago +1

    thank you! understood the concept smoothly with your video!

    • @MaheshHuddar
      @MaheshHuddar  6 months ago

      Thank You
      Do like share and subscribe

    • @ToriliaShine
      @ToriliaShine 6 months ago

      @@MaheshHuddar did that; a question though: for the forward pass, what about biases? They are values with their own weights, right? Were they just not included for this example?

    • @ToriliaShine
      @ToriliaShine 6 months ago

      ah never mind, got it from your next video in this series lol

  • @jinkagamura2820
    @jinkagamura2820 2 years ago +35

    I have a doubt. In many places, I have seen the error calculated using the formula E = 1/2 (y - y*)^2, but you have calculated it using plain subtraction. Which is correct?

    • @keertichauhan6221
      @keertichauhan6221 2 years ago +3

      same doubt.. what is the correct method?

    • @priyanshumohanty5261
      @priyanshumohanty5261 2 years ago +7

      @@keertichauhan6221 I think there are different methods to compute the error. The one mentioned above is the squared error. The one shown in the video is also correct, but MSE or RMSE are generally regarded as better measures.

    • @chukwuemekaomerenna4396
      @chukwuemekaomerenna4396 2 years ago +13

      For multiple data points you use the error function, and for a single output you use the loss function.
      Loss function: error = actual - target
      Error function: 1/2 (actual - target)^2

    • @chukwuemekaomerenna4396
      @chukwuemekaomerenna4396 2 years ago +13

      For multiple data points, you use the error function: error = 1/2 * sum(y_actual - y_target)^2.
      For a single data point, you use the loss function: loss = y_actual - y_target.

    • @sharanyas1565
      @sharanyas1565 2 years ago

      The errors are made positive by squaring.
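The two conventions in this sub-thread are consistent: the 1/2 in E = ½(t − o)² exists so that the 2 produced by differentiation cancels, and the same (t − o) factor then shows up in the delta formulas. A small sketch with illustrative numbers:

```python
# Two common ways to report the error for a single example:
# plain difference (as in the video) and squared error E = 0.5*(t - o)^2.
o, t = 0.5932, 0.5   # illustrative output and target

plain_error = t - o                 # signed difference
squared_error = 0.5 * (t - o) ** 2  # always non-negative

# Differentiating E with respect to o: dE/do = -(t - o); the 1/2 cancels the 2,
# leaving exactly the (t - o) factor used in the weight-update rule.
dE_do = -(t - o)
print(plain_error, squared_error, dE_do)
```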

  • @paul-pp1op
    @paul-pp1op 1 year ago +1

    The best video explanation of ANN backpropagation. Many thanks, sir

    • @MaheshHuddar
      @MaheshHuddar  1 year ago +1

      Thank You
      Do like share and subscribe

  • @sharanyas1565
    @sharanyas1565 2 years ago +1

    Very clear. Thanks for uploading this video.

    • @MaheshHuddar
      @MaheshHuddar  2 years ago

      Welcome
      Do like share and subscribe

  • @Professor_el
    @Professor_el 5 months ago

    The formulas for delta W only work because of the nature of the activation function, right? If it is a hyperbolic tangent or ReLU, the formulas change, right?
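Right: the o(1 − o) factor in the delta formulas is specifically the sigmoid's derivative, so with tanh or ReLU only that derivative factor changes and the rest of the chain rule stays the same. A sketch under that assumption (the numbers are illustrative):

```python
import math

# delta = error_signal * activation'(net); only the derivative factor changes
# between activation functions.
def d_sigmoid(net):
    s = 1.0 / (1.0 + math.exp(-net))
    return s * (1.0 - s)

def d_tanh(net):
    return 1.0 - math.tanh(net) ** 2

def d_relu(net):
    return 1.0 if net > 0 else 0.0

net, err = 0.377, 0.12   # illustrative net input and error signal
deltas = {name: err * d(net) for name, d in
          [("sigmoid", d_sigmoid), ("tanh", d_tanh), ("relu", d_relu)]}
print(deltas)
```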

  • @justenjoy3744
    @justenjoy3744 1 year ago

    Which book did you take this problem from?

  • @amulyadevireddy5669
    @amulyadevireddy5669 2 years ago +1

    What about the bias factor??

    • @halihammer
      @halihammer 2 years ago

      He did not add it, to keep it simple I guess.
      But you can add the bias by making it a third input; then the method doesn't change, you just have a new (constant) input for each layer, normally set to 1, and the associated weight acts as your bias, since 1*biasWeight = biasWeight. I just added 1 to my input vector and generated an additional weight for the weight matrix. But I'm also just learning and not sure if I'm 100% correct..

  • @usmanyousaaf
    @usmanyousaaf 5 months ago

    Today is my exam again; well explained, sir

  • @tanvirahmed552
    @tanvirahmed552 2 years ago +1

    Nice video, easily understood the topic, thank you

    • @MaheshHuddar
      @MaheshHuddar  2 years ago

      Welcome
      Do like share and subscribe

  • @mehmetakifvardar
    @mehmetakifvardar 9 months ago

    Mr. Huddar, thanks a lot for the perfect explanation. One thing though: how do I calculate the change of the bias term for each neuron in my neural network?

  • @usmanyousaaf
    @usmanyousaaf 1 year ago

    Last night !!
    Today is the exam; well explained, boss

  • @JudyXu-d6j
    @JudyXu-d6j 9 months ago

    I have a question: what if we have a bias term and some bias weights? Do we need to account for those, or would they be 0?

    • @MaheshHuddar
      @MaheshHuddar  9 months ago

      Yes, you have to consider them.
      Follow this video: ruclips.net/video/n2L1J5JYgUk/видео.html

  • @msatyabhaskarasrinivasacha5874
    @msatyabhaskarasrinivasacha5874 5 months ago

    It's an awesome explanation, sir.... no words to thank you, sir

    • @MaheshHuddar
      @MaheshHuddar  5 months ago +1

      You are most welcome
      Do like share and subscribe

  • @web3sorcerer
    @web3sorcerer 1 month ago

    this is a great lecture!

  • @vinayvictory8010
    @vinayvictory8010 1 year ago

    At what stage do we stop the forward and backward passes? When the error becomes zero?

    • @muhtasirimran
      @muhtasirimran 1 year ago

      This is specified as the number of epochs and must be given in a question. There is no fixed stopping condition; it really depends on your needs.

  • @romankyrkalo9633
    @romankyrkalo9633 2 years ago +1

    Great video, easy to understand

    • @MaheshHuddar
      @MaheshHuddar  2 years ago

      Thank You
      Do like share and subscribe

  • @flakysob
    @flakysob 10 months ago

    Thank you so much! You saved me. I subscribed. Thanks

    • @MaheshHuddar
      @MaheshHuddar  10 months ago

      Welcome
      Please do like and share

  • @dhanushvcse4056
    @dhanushvcse4056 1 year ago

    Sir, please upload the gradient descent video a little faster

    • @MaheshHuddar
      @MaheshHuddar  1 year ago +1

      Videos on Gradient Descent:
      ruclips.net/video/ktGm0WCoQOg/видео.html
      ruclips.net/video/5hB4_8o34GU/видео.html
      ruclips.net/video/ibKP0nIT7YU/видео.html

  • @AbhishekSingh-up4rv
    @AbhishekSingh-up4rv 2 years ago +1

    Awesome explanation.
    Thanks

    • @MaheshHuddar
      @MaheshHuddar  2 years ago +1

      Welcome
      Do like share and subscribe

  • @Ayesha_01257
    @Ayesha_01257 7 months ago

    Very well explained... Keep up the good work

    • @MaheshHuddar
      @MaheshHuddar  7 months ago

      Thank You
      Do like share and subscribe

  • @muhtasirimran
    @muhtasirimran 1 year ago

    I have a confusion. We use ReLU on the hidden layer, not sigmoid. Shouldn't we calculate the hidden layer's activation using ReLU instead of sigmoid?

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Yes, you have to.
      Do the calculation based on the activation function.

    • @muhtasirimran
      @muhtasirimran 1 year ago

      @Mahesh Huddar I know. You have used the sigmoid function on the hidden layer. This will result in an error.

  • @bagusk_awan
    @bagusk_awan 11 months ago

    Sir, thank you for this great video! It was really helpful. I appreciate the clear explanation

    • @MaheshHuddar
      @MaheshHuddar  11 months ago

      Glad it was helpful!
      Do like share and subscribe

    • @mr.commonsense6645
      @mr.commonsense6645 11 months ago

      GOATED explaining, great, great

  • @kaavyashree6209
    @kaavyashree6209 1 year ago

    Sir, how do we update the bias in backpropagation?

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Refer to this video: ruclips.net/video/n2L1J5JYgUk/видео.html

  • @25fpslagger81
    @25fpslagger81 5 months ago

    Thank you sir, all of your machine learning videos have helped us students a lot

    • @MaheshHuddar
      @MaheshHuddar  5 months ago

      Welcome
      Do like share and subscribe

    • @hemanthd623
      @hemanthd623 5 months ago

      @@MaheshHuddar Sir, can you solve problems on HMM and CNN please?

  • @tuccecintuglu404
    @tuccecintuglu404 8 months ago

    YOU ARE THE BEST

  • @srisangeeth4131
    @srisangeeth4131 5 months ago

    The concept is clear; I got confidence in this concept, sir. Thank you 👍👍👍👍

    • @MaheshHuddar
      @MaheshHuddar  5 months ago +1

      Welcome
      Do like share and subscribe

    • @srisangeeth4131
      @srisangeeth4131 5 months ago

      @@MaheshHuddar Sir, can you provide videos on Gaussian processes in machine learning?

  • @priyaprabhu7101
    @priyaprabhu7101 2 years ago

    Nice video, sir.. where is the bias here?

  • @sahmad120967
    @sahmad120967 1 year ago

    Great, sir; it is a very clear example of how to calculate an ANN. Thanks, keep being productive

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Thank You
      Do like share and subscribe

  • @sahanazbegam6913
    @sahanazbegam6913 3 months ago

    very clear, thank you for the content

    • @MaheshHuddar
      @MaheshHuddar  3 months ago

      Welcome
      Do like share and subscribe

  • @mdnahidulislam13
    @mdnahidulislam13 5 months ago

    Clear explanation. Recommended...

    • @MaheshHuddar
      @MaheshHuddar  5 months ago

      Thank You
      Do like share and subscribe

  • @SaiNath-cw7yn
    @SaiNath-cw7yn 3 months ago

    thank you, sir

    • @MaheshHuddar
      @MaheshHuddar  3 months ago

      Welcome
      Do like share and subscribe

  • @a5a5aa37
    @a5a5aa37 11 months ago

    thanks a lot for your explanation!

    • @MaheshHuddar
      @MaheshHuddar  11 months ago

      Welcome
      Do like share and subscribe

  • @iwantpeace6535
    @iwantpeace6535 6 months ago

    THANK YOU SIR, BRILLIANT INDIAN MIND

    • @MaheshHuddar
      @MaheshHuddar  6 months ago

      Welcome
      Do like share and subscribe

  • @uday-w3y
    @uday-w3y 1 year ago

    mahesh daale

  • @ajayofficial3706
    @ajayofficial3706 1 year ago

    Thank you, sir, for explaining each and every point.

    • @MaheshHuddar
      @MaheshHuddar  1 year ago +1

      Welcome
      Do like share and Subscribe

  • @1981Praveer
    @1981Praveer 1 year ago

    #Mahesh Huddar, don't we need a bias? Just curious

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Follow this video
      ruclips.net/video/n2L1J5JYgUk/видео.html

    • @1981Praveer
      @1981Praveer 1 year ago

      @@MaheshHuddar @6:31 min Just curious, can you share the web page for these formulae?

  • @bharatreddy972
    @bharatreddy972 1 year ago

    Thank you so much, sir..... These videos gave us a clear understanding of all the machine learning concepts...

  • @NotLonely_Traveler
    @NotLonely_Traveler 5 months ago

    Finally one that makes sense

    • @MaheshHuddar
      @MaheshHuddar  5 months ago

      Thank You
      Do like share and subscribe

  • @lakshsinghania
    @lakshsinghania 10 months ago

    sir, each perceptron also has a bias with it, right?

    • @MaheshHuddar
      @MaheshHuddar  10 months ago

      Yes

    • @MaheshHuddar
      @MaheshHuddar  10 months ago

      Follow this video for the bias: ruclips.net/video/n2L1J5JYgUk/видео.html

    • @lakshsinghania
      @lakshsinghania 10 months ago +1

      Thank you a lot, sir!! @@MaheshHuddar

  • @user-fl7bm8jc8o
    @user-fl7bm8jc8o 1 year ago

    Thanks a lot sir 🙏🙏🙏🙏🙏🙏🙏

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Most welcome
      Do like share and subscribe

  • @adilmughal2251
    @adilmughal2251 1 year ago

    Amazing stuff, just to the point and clear.

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Thank You
      Do like share and subscribe

  • @pritampatil4669
    @pritampatil4669 2 years ago

    What about the bias terms?

  • @tlbtlb3950
    @tlbtlb3950 1 year ago

    Not bad, my Indian friend!

  • @MissPiggyM976
    @MissPiggyM976 1 year ago

    Very useful, thanks!

  • @iqramunir1468
    @iqramunir1468 1 year ago

    Thank you so much sir

  • @sathviksrikanth7362
    @sathviksrikanth7362 1 year ago

    Thanks a lot Sir!!!

  • @lalladiva6097
    @lalladiva6097 1 year ago

    you are a life saver, thank you soooo much.

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Thank You
      Do like share and subscribe

  • @vaiebhavpatil2340
    @vaiebhavpatil2340 1 year ago

    based

  • @AmnaCode
    @AmnaCode 4 months ago

    Thanks for the solution

    • @MaheshHuddar
      @MaheshHuddar  4 months ago +1

      Welcome
      Do like share and subscribe

    • @AmnaCode
      @AmnaCode 4 months ago

      @@MaheshHuddar sure. Thanks 😊

  • @TXS-xt6vj
    @TXS-xt6vj 10 months ago

    you are a legend

    • @MaheshHuddar
      @MaheshHuddar  10 months ago

      Welcome
      Do like share and subscribe

  • @lakeshkumar1252
    @lakeshkumar1252 1 year ago

    thank you sir

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Most welcome
      Do like share and subscribe

  • @satwik4823
    @satwik4823 1 year ago

    GODDD!!!!!!!!!!!!!!!!!!!

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Thank You
      Do like share and subscribe

  • @umakrishnamarineni3520
    @umakrishnamarineni3520 1 year ago

    thank you sir.

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Most welcome
      Do like share and subscribe

  • @SHUBHAMKUMAR-cd7fs
    @SHUBHAMKUMAR-cd7fs 4 months ago

    awesome.

    • @MaheshHuddar
      @MaheshHuddar  4 months ago

      Thanks!
      Do like share and subscribe

  • @Vishnu_Datta_1698
    @Vishnu_Datta_1698 10 months ago

    Thank you

    • @MaheshHuddar
      @MaheshHuddar  10 months ago

      Welcome
      Do like share and subscribe

  • @arj1045
    @arj1045 1 year ago

    well done sir

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Thank You
      Do like share and subscribe

  • @redwanhossain7563
    @redwanhossain7563 1 year ago

    what about the bias?

  • @arminmow
    @arminmow 1 year ago

    You saved me, you're a hero, thank you

  • @지엔서
    @지엔서 8 months ago

    good video

    • @MaheshHuddar
      @MaheshHuddar  8 months ago

      Thank You
      Do like share and subscribe

  • @jacki8726
    @jacki8726 1 year ago

    Very helpful

  • @AnubhavApurva
    @AnubhavApurva 1 year ago

    Thank you!

  • @color2653
    @color2653 2 years ago

    Thank you

    • @MaheshHuddar
      @MaheshHuddar  2 years ago +1

      Welcome
      Do like share and subscribe

  • @SourabhKumar-nr1yq
    @SourabhKumar-nr1yq 10 months ago

    🙏🙏🙏🙏🙏🙏

    • @MaheshHuddar
      @MaheshHuddar  10 months ago

      Do like share and subscribe

  • @tatendamachikiche5535
    @tatendamachikiche5535 2 years ago

    good stuff, thank you

    • @MaheshHuddar
      @MaheshHuddar  2 years ago

      Welcome
      Do like share and subscribe

  • @allahthemostmerciful2706
    @allahthemostmerciful2706 1 year ago

    Soooooo Good❤

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Thank You
      Do like share and subscribe

  • @መለኛው-ተ4ዀ
    @መለኛው-ተ4ዀ 1 year ago

    thanks

  • @ayushhmalikk
    @ayushhmalikk 2 years ago

    you're a legend

    • @MaheshHuddar
      @MaheshHuddar  2 years ago

      Thank You
      Do like share and subscribe

  • @utkarshmangal6559
    @utkarshmangal6559 1 year ago +1

    you are a king, sir. Thank you for saving me from my exam tomorrow.

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Welcome
      Do like share and subscribe
      All the very best for your exams

  • @makkingeverything6610
    @makkingeverything6610 2 years ago

    thank you man

    • @MaheshHuddar
      @MaheshHuddar  2 years ago

      Welcome
      Do like share and subscribe

  • @ervincsengeri1840
    @ervincsengeri1840 1 year ago

    Thank you!

  • @imranimmu4714
    @imranimmu4714 2 years ago

    thank you

    • @MaheshHuddar
      @MaheshHuddar  2 years ago

      Welcome
      Do like share and subscribe