Custom Activation and Loss Functions in Keras and TensorFlow with Automatic Differentiation

  • Published: 27 Oct 2024

Comments • 26

  • @mockingbird3809
    @mockingbird3809 5 years ago +3

    Great video on autograds, amazing as always. Loved it, Dr. Jeff.

  • @ChandraShekhar-rn9ty
    @ChandraShekhar-rn9ty 4 years ago +1

    Hi Jeff. Thank you so much. I spent a couple of hours figuring out how on earth Keras manages any change to a custom loss so easily. I was worried whether it even checks that the function is differentiable. With this video, things are pretty clear now.

  • @NisseOhlsen
    @NisseOhlsen 2 years ago

    Small correction @1:36: you don't "take the partial derivative of each weight"; you take the partial derivative of the loss function with respect to each weight. Also @7:24, the derivative of x^2 is 2x, not x. And @7:46, that IS the definition of the ANALYTIC derivative; it is also used in the discrete case, the difference being that the steps are finite rather than infinitesimal.
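
    Both points are easy to verify in code. A minimal sketch, assuming TensorFlow 2.x with eager execution: GradientTape recovers the analytic derivative 2x, and a finite-difference quotient (the discrete case) converges to the same value.

        import tensorflow as tf

        x = tf.Variable(4.0)
        with tf.GradientTape() as tape:
            y = x ** 2                  # f(x) = x^2
        dy_dx = tape.gradient(y, x)
        print(dy_dx.numpy())            # 8.0, the analytic derivative 2x at x = 4

        # discrete (finite-difference) approximation with a finite step h
        h = 1e-4
        numeric = ((4.0 + h) ** 2 - (4.0 - h) ** 2) / (2 * h)
        print(numeric)                  # ~8.0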

  • @subhajitpaul8391
    @subhajitpaul8391 1 year ago

    Thank you so much for this amazing video.

  • @kbd2820
    @kbd2820 3 years ago +1

    Learning from the legend. It was an amazing experience. Thank you.

  • @prajith3676
    @prajith3676 4 years ago

    I was actually looking for this GradientTape() everywhere. Thank you! Finally my doubt is cleared :-)

  • @tonihullzer1611
    @tonihullzer1611 4 years ago

    Awesome work, liked and subscribed, excited to see more.

  • @HA-pz7mv
    @HA-pz7mv 3 years ago

    Great video, thanks!

  • @maulikmadhavi
    @maulikmadhavi 4 years ago

    Super explanation! Subscribed!

  • @tanyajain3461
    @tanyajain3461 4 years ago +1

    Does GradientTape break when math operations are applied to custom indexes of the input tensor, or when stacking tensors and then using them in the loss function? Please suggest a workaround; I've been trying to implement this, but it returns all gradients as NaN.
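
    GradientTape does differentiate through indexing and stacking as long as they stay inside TensorFlow ops, so one workaround sketch (illustrative, assuming TF 2.x) is to use tf.gather for the custom indexes and tf.stack for the stacking; NaN gradients more often trace back to a numerically unsafe op elsewhere in the loss, such as a division by zero or a sqrt at 0.

        import tensorflow as tf

        x = tf.Variable([1.0, 2.0, 3.0, 4.0])
        with tf.GradientTape() as tape:
            a = tf.gather(x, [0, 2])            # differentiable indexing
            b = tf.gather(x, [1, 3])
            stacked = tf.stack([a, b])          # stacking stays on the tape
            loss = tf.reduce_sum(stacked ** 2)
        # gather gradients arrive as IndexedSlices; densify before printing
        grad = tf.convert_to_tensor(tape.gradient(loss, x))
        print(grad.numpy())                     # [2. 4. 6. 8.] -- no NaNs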

  • @heecheolcho3246
    @heecheolcho3246 3 years ago +1

    Thank you for a good lecture.
    I have a question.
    y = tf.divide(1.0,tf.add(1,tf.exp(tf.negative(x))))
    vs
    y = 1.0/(1+tf.exp(-x))
    Is there any difference?
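
    For what it's worth, the two spellings should behave identically: Python's arithmetic operators on tensors dispatch to the same TensorFlow ops that tf.divide, tf.add, and tf.negative name explicitly, so GradientTape records the same computation either way. A quick check, assuming TF 2.x:

        import tensorflow as tf

        x = tf.Variable(0.5)
        with tf.GradientTape(persistent=True) as tape:
            y1 = tf.divide(1.0, tf.add(1.0, tf.exp(tf.negative(x))))  # explicit ops
            y2 = 1.0 / (1.0 + tf.exp(-x))                             # overloaded operators
        # identical values and identical gradients
        print(tape.gradient(y1, x).numpy(), tape.gradient(y2, x).numpy())
        del tape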

  • @slime121212
    @slime121212 4 years ago

    Thank you for this video; this question was very important to me, and now I know how to work it out :)

  • @StormiestOdin2
    @StormiestOdin2 5 years ago +2

    Hi Jeff,
    Thank you for all these great videos. I have a question about TensorFlow: if I create a model with no hidden layers, does that make my model not a neural network but linear discriminant analysis? Like this:
    model = keras.Sequential([
        keras.layers.Dense(12, activation="relu"),
        keras.layers.Dense(3, activation="softmax")
    ])

    • @HeatonResearch
      @HeatonResearch  5 years ago

      It's both at that point.
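
      A model with no hidden layers is still a valid Keras network, and it is also a linear classifier; strictly speaking, a single softmax layer is multinomial logistic regression rather than LDA. A minimal sketch of that degenerate case (layer sizes here are illustrative):

          from tensorflow import keras

          # no hidden layer: one linear map followed by softmax,
          # i.e. multinomial logistic regression
          model = keras.Sequential([
              keras.layers.Dense(3, activation="softmax", input_shape=(12,))
          ])
          model.compile(optimizer="adam", loss="categorical_crossentropy")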

    • @StormiestOdin2
      @StormiestOdin2 5 years ago +1

      Ahh, thank you. I really appreciate all the videos you put on YouTube; they have helped me loads with making my own neural network :)

  • @tonsandes
    @tonsandes 1 year ago

    Hi Jeff, is there a way to access the y_pred information?
    I want to build my loss function, but not in the conventional way, which passes y_pred and y_true to a tf or backend function.
    I need to first access the y_pred information and then apply a function that estimates the std, returning that std value as the output of my loss function.
    Do you know how to do this?
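
    In Keras, a custom loss only needs the signature (y_true, y_pred), and y_pred arrives as an ordinary tensor, so any differentiable TensorFlow op can be applied to it directly. A hedged sketch of a loss that estimates and returns the std of the predictions (illustrative, not from the video):

        import tensorflow as tf

        def std_loss(y_true, y_pred):
            # y_pred is a plain tensor here; reduce_std estimates the
            # standard deviation of the predictions and is differentiable
            return tf.math.reduce_std(y_pred)

        # usage sketch: model.compile(optimizer="adam", loss=std_loss)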

  • @brubrudsi
    @brubrudsi 5 years ago +2

    Also, in the beginning you said the derivative of x^2 is x. It is 2x.

    • @HeatonResearch
      @HeatonResearch  5 years ago +6

      Yes, you are correct, good point. All the more reason for me to use automatic differentiation. 😞

  • @shunnie8482
    @shunnie8482 3 years ago

    Thanks for the amazing explanation, I finally understand GradientTape (I think at least haha).

  • @SrEngr
    @SrEngr 2 years ago

    Which version of TensorFlow is this?

  • @zonexo5364
    @zonexo5364 4 years ago

    Strange, why did I get "Tensor("AddN:0", shape=(), dtype=float32)" as output instead?

    • @zonexo5364
      @zonexo5364 4 years ago

      Realised that in TensorFlow I have to run a session: with tf.Session() as sess: print(dz_dx.eval())
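
      For reference: in TensorFlow 1.x graph mode, tape.gradient returns a symbolic tensor that must be evaluated in a session, which is exactly the Tensor("AddN:0", ...) output above; under TensorFlow 2.x eager execution the same code yields a concrete value. A sketch of both, with illustrative variable names:

          import tensorflow as tf

          x = tf.Variable(3.0)
          with tf.GradientTape() as tape:
              z = x ** 2
          dz_dx = tape.gradient(z, x)

          # TF 2.x (eager): the gradient is a concrete value immediately
          print(dz_dx.numpy())  # 6.0

          # TF 1.x (graph mode) requires a session instead:
          # with tf.Session() as sess:
          #     sess.run(tf.global_variables_initializer())
          #     print(dz_dx.eval())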

  • @brubrudsi
    @brubrudsi 5 years ago

    The derivative of 4^2 is 0, not 8.

    • @mockingbird3809
      @mockingbird3809 5 years ago +3

      I think you should take the derivative of the function, not of the number you plug in. Take the derivative of the function first, then substitute the value into the derived function to get the numeric output. Zero only comes from differentiating a constant: the derivative of f(x) = C is zero, whereas here the function is f(x) = x^2, so f'(4) = 8.

  • @luchofrancisco
    @luchofrancisco 4 years ago

    Thanks, nice video!