Softmax (with Temperature) | Essentials of ML

  • Published: 23 Oct 2024

Comments • 32

  • @ssshukla26
    @ssshukla26 2 years ago +2

    Great to see a new video after so many days... Will watch it afterwards... thank you Sir....

  • @ninobach7456
    @ninobach7456 11 months ago +2

    This video was one big aha moment, thanks! A lot of weight readjusting

  • @lielleman6593
    @lielleman6593 1 year ago +2

    Awesome explanation! Thanks

  • @oguzhanercan4701
    @oguzhanercan4701 2 years ago +2

    Great explanation, thanks a lot

  • @abhishekbasu4892
    @abhishekbasu4892 9 months ago +1

    Amazing Explanation!

  • @victorsilvadossantos2769
    @victorsilvadossantos2769 3 months ago

    Great video!

  • @rbhambriiit
    @rbhambriiit 1 year ago +1

    Thanks for making it simple and clear.

  • @murphp151
    @murphp151 2 years ago +2

    This is brilliant

  • @peterorlovskiy2134
    @peterorlovskiy2134 11 months ago +1

    Great video! Thank you Kapil

  • @SM-mj5np
    @SM-mj5np 21 days ago

    You're awesome.

  • @kalinduSekara
    @kalinduSekara 7 months ago +1

    Great explanation

  • @mrproxj
    @mrproxj 2 years ago +1

    Hi, thanks for this video. Now I know why my classifier always predicted with such high confidence, be it correct or incorrect. Could there be something else other than temperature to solve this? I would like to determine how confident the model is in its prediction. Is temperature the way to go?

    • @KapilSachdeva
      @KapilSachdeva  2 years ago +1

      Another technique is called label smoothing. It is related, but it is applied to the ground-truth labels. See - proceedings.neurips.cc/paper/2019/file/f1748d6b0fd9d439f71450117eba2725-Paper.pdf
      There is also something called model calibration, but I have not yet applied it to neural networks. (A short sketch of label smoothing follows this thread.)

    • @mrproxj
      @mrproxj 2 years ago +1

      Thanks a lot. This will come a lot in handy!
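
To make the label-smoothing suggestion above concrete, here is a minimal NumPy sketch (an illustration, not code from the video); the smoothing value eps=0.1 is an arbitrary choice.

```python
import numpy as np

def smooth_labels(y_onehot, eps=0.1):
    """Blend a one-hot target with the uniform distribution.

    The true class keeps 1 - eps of the probability mass and the remaining
    eps is spread evenly over all K classes, so the model is never pushed
    toward an output probability of exactly 1.0.
    """
    k = y_onehot.shape[-1]
    return y_onehot * (1.0 - eps) + eps / k

y = np.array([0.0, 0.0, 1.0])        # hard one-hot target
print(smooth_labels(y, eps=0.1))     # [0.0333... 0.0333... 0.9333...]
```

Recent versions of PyTorch expose the same idea directly through the label_smoothing argument of torch.nn.CrossEntropyLoss.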

  • @krp2834
    @krp2834 2 years ago

    Instead of using the exp function in softmax to make the logits positive, what if we shift the logits by the least logit value: [1, -2, 0] => [3, 0, 2]? This also preserves the relative ordering of the logits.

    • @KapilSachdeva
      @KapilSachdeva  2 years ago +1

      Thanks Prasanna; I forgot to mention that the transformation should be differentiable.

    • @Gaetznaa
      @Gaetznaa 2 years ago

      The operation is differentiable; isn’t it just an ordinary subtraction (by 2 in the example)?

    • @krp2834
      @krp2834 2 years ago

      @@Gaetznaa The min operation, which is required to find the minimum logit to subtract, is not differentiable, I guess.

    • @ssssssstssssssss
      @ssssssstssssssss 2 years ago

      @@krp2834 The min isn't differentiable everywhere, but it is differentiable at all other points. The bigger issue is that if you do that, the minimum logit is guaranteed to always get a "probability" of exactly zero, which may not be desirable. It will also prevent you from using loss functions like KL divergence or cross entropy. Also, the shifted values will not be "logits"; I suggest you review the definition of a logit.
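
A small numerical sketch of the point above (illustrative code, not from the thread): shifting by the minimum and renormalizing always assigns the smallest logit a probability of exactly zero, while the exponential in softmax keeps every class strictly positive.

```python
import numpy as np

logits = np.array([1.0, -2.0, 0.0])

# Standard softmax: exponentiate, then normalize.
# Subtracting the max first is only for numerical stability.
e = np.exp(logits - logits.max())
softmax = e / e.sum()

# Proposed alternative: shift by the minimum, then normalize.
shifted = logits - logits.min()      # [3., 0., 2.]
shift_norm = shifted / shifted.sum()

print(softmax)     # ~[0.705 0.035 0.259]  -- every class > 0
print(shift_norm)  # [0.6 0.  0.4]         -- smallest logit gets exactly 0
```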

  • @behnamyousefimehr8717
    @behnamyousefimehr8717 7 months ago

    Good

  • @zhoudan4387
    @zhoudan4387 4 months ago

    I thought temperature was like getting a fever and saying random things :)

    • @KapilSachdeva
      @KapilSachdeva  4 months ago

      Depends on the context. Here it is about scaling the logits. In LLM APIs it is used to control the stochasticity/randomness of sampling.
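
Since temperature comes up in both senses here, a minimal sketch of temperature-scaled softmax (the logit values are made up for illustration): dividing the logits by a temperature T sharpens the distribution when T < 1 and flattens it when T > 1, which is what LLM APIs exploit when sampling.

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Softmax over logits scaled by 1 / temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                     # numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [2.0, 1.0, 0.1]             # made-up logits
for t in (0.5, 1.0, 2.0):
    # T < 1 -> sharper (more confident); T > 1 -> flatter (more random)
    print(t, softmax_with_temperature(logits, t).round(3))
```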

  • @HellDevRisen
    @HellDevRisen 5 months ago

    Great video; thank you :)