1. Gradient Descent | Delta Rule | Delta Rule Derivation Nonlinearly Separable Data by Mahesh Huddar

  • Published: 7 Feb 2025
  • 1. Gradient Descent and Delta Rule, Derivation of Delta Rule, Linearly and Non-linearly Separable Data by Mahesh Huddar
    Gradient Descent and Delta Rule: www.vtupulse.c...
    Machine Learning - • Machine Learning
    Big Data Analysis - • Big Data Analytics
    Data Science and Machine Learning - • Machine Learning
    Python Tutorial - • Python Application Pro...

Comments • 18

  • @1786ayesha
    @1786ayesha 3 years ago +5

    We want the backpropagation algorithm and sampling theory as soon as possible. Your explanation is excellent.

  • @RudroprasadBandyopadhyay
    @RudroprasadBandyopadhyay 4 days ago +1

    Sir, I couldn't follow the differentiation part. How did you do it?
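    (A sketch of the differentiation step being asked about, assuming the standard gradient-descent derivation for an unthresholded linear unit with output $o_d = \vec{w} \cdot \vec{x}_d$, as in Tom Mitchell's Machine Learning:)

        E(\vec{w}) = \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2

        \frac{\partial E}{\partial w_i} = \frac{1}{2} \sum_{d \in D} 2\,(t_d - o_d)\,\frac{\partial}{\partial w_i}\bigl(t_d - \vec{w} \cdot \vec{x}_d\bigr) = \sum_{d \in D} (t_d - o_d)\,(-x_{id})

        \Delta w_i = -\eta\,\frac{\partial E}{\partial w_i} = \eta \sum_{d \in D} (t_d - o_d)\,x_{id}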

  • @pravallikapravs9928
    @pravallikapravs9928 3 years ago +5

    Using the delta rule, find the weights required to perform the following classifications: vectors (1, 1, -1, -1) and (-1, -1, -1, -1) belong to the class (target value +1); vectors (1, 1, 1, 1) and (-1, -1, 1, -1) do not belong to the class (target value -1). Use a suitable learning rate and initial values for the weights. (Perform the training for 2 epochs.)
    Answer please

    • @hrishikeshbade
      @hrishikeshbade 1 year ago +2

      To use the delta rule, we need to define the activation function, the error function, and the weight update rule.
      Activation function: we will use the sign function:
          f(x) = sign(x) = {  1 if x > 0
                              0 if x = 0
                             -1 if x < 0 }
      Error function: we will use the squared error for a single example:
          E = 1/2 * (target - output)^2
      Weight update rule: we will use the delta rule for weight updates:
          wi = wi + learning_rate * (target - output) * xi
      Let's initialize the weights, bias, and learning rate:
          w1 = 0.1, w2 = -0.2, w3 = 0.3, w4 = -0.4
          b = 0.5
          learning_rate = 0.1
      For the first training example (1, 1, -1, -1), the target output is +1. Let's calculate the output of the network:
          output = sign(w1 * 1 + w2 * 1 + w3 * (-1) + w4 * (-1) + b)
                 = sign(0.1 * 1 - 0.2 * 1 + 0.3 * (-1) - 0.4 * (-1) + 0.5)
                 = sign(0.1 - 0.2 - 0.3 + 0.4 + 0.5)
                 = sign(0.5)
                 = 1
      The output is already correct, so we do not need to update the weights for this example.
      For the second training example (-1, -1, -1, -1), the target output is +1. Let's calculate the output of the network:
          output = sign(w1 * (-1) + w2 * (-1) + w3 * (-1) + w4 * (-1) + b)
                 = sign(-0.1 + 0.2 - 0.3 + 0.4 + 0.5)
                 = sign(0.7)
                 = 1
      The output matches the target here as well, so the delta rule leaves the weights unchanged, since (target - output) = 0 in every update:
          w1 = w1 + learning_rate * (1 - 1) * (-1) = 0.1
          w2 = w2 + learning_rate * (1 - 1) * (-1) = -0.2
          w3 = w3 + learning_rate * (1 - 1) * (-1) = 0.3
          w4 = w4 + learning_rate * (1 - 1) * (-1) = -0.4
          b  = b  + learning_rate * (1 - 1) = 0.5
      The weights and bias do not change, so we carry them forward to the next example.
      For the third training example (1, 1, 1, 1), the target output is -1. Let's calculate the output of the network:
          output = sign(w1 * 1 + w2 * 1 + w3 * 1 + w4 * 1 + b)
                 = sign(0.1 - 0.2 + 0.3 - 0.4 + 0.5)
                 = sign(0.3)
                 = 1
      This time the output is incorrect, so the delta rule updates each weight with (target - output) = (-1 - 1) = -2:
          w1 = 0.1 + 0.1 * (-2) * 1 = -0.1
          w2 = -0.2 + 0.1 * (-2) * 1 = -0.4
          w3 = 0.3 + 0.1 * (-2) * 1 = 0.1
          w4 = -0.4 + 0.1 * (-2) * 1 = -0.6
          b  = 0.5 + 0.1 * (-2) = 0.3
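      The fourth example and the second epoch proceed the same way. A minimal Python sketch of this training loop, assuming the commenter's sign activation, starting weights (0.1, -0.2, 0.3, -0.4), bias 0.5, and learning rate 0.1 (these values were chosen by the commenter, not given in the exercise):

          # Delta-rule training sketch for the classification exercise above.
          # The starting weights, bias, and learning rate are assumed values.
          def sign(x):
              return 1 if x > 0 else (-1 if x < 0 else 0)

          # Training data: (input vector, target class)
          data = [
              (( 1,  1, -1, -1), +1),
              ((-1, -1, -1, -1), +1),
              (( 1,  1,  1,  1), -1),
              ((-1, -1,  1, -1), -1),
          ]

          w = [0.1, -0.2, 0.3, -0.4]   # initial weights (assumed)
          b = 0.5                      # initial bias (assumed)
          lr = 0.1                     # learning rate (assumed)

          for epoch in range(2):       # the exercise asks for 2 epochs
              for x, target in data:
                  # forward pass: thresholded weighted sum
                  output = sign(sum(wi * xi for wi, xi in zip(w, x)) + b)
                  # delta-rule update: w_i += lr * (target - output) * x_i
                  err = target - output
                  w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                  b += lr * err
              print(f"epoch {epoch + 1}: w = {w}, b = {b:.2f}")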

  • @livingston8267
    @livingston8267 3 years ago +11

    I wish you were my ML sir 🥺🥺🥺

  • @shiva-ef3lr
    @shiva-ef3lr 5 months ago

    Thank you sir

  • @muhannedmtd22
    @muhannedmtd22 3 years ago

    Thank you very much. You helped me a lot

  • @manishdas6525
    @manishdas6525 2 years ago

    Thank you, it was fun and a very good explanation.

  • @EnigmaAI-88
    @EnigmaAI-88 1 year ago

    Good explanation

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Thank You
      Please do like, share, and subscribe.

  • @vaibhavchauhan3741
    @vaibhavchauhan3741 1 year ago +2

    Really, sir, you are too good. 👍👍

    • @MaheshHuddar
      @MaheshHuddar  1 year ago

      Thank You
      Do like, share, and subscribe.

  • @shreyaverma4282
    @shreyaverma4282 4 years ago +4

    👍👍👍

  • @nagavenik4862
    @nagavenik4862 3 years ago

    Sir, I am not able to find your backpropagation algorithm video.
    Please can you help me find it...