[Quiz] Regularization in Deep Learning, Lipschitz continuity, Gradient regularization

  • Published: 15 Oct 2024
  • Science

Comments • 11

  • @willsmithorg
    @willsmithorg 2 years ago +7

    Thank you! It was very clear and the practical example (GAN) was very helpful. I always wondered what Lipschitz continuity was, but didn't dare to ask!

  • @DerPylz
    @DerPylz 2 years ago +6

    Thank you for explaining these questions in more detail! :)

  • @urfinjus378
    @urfinjus378 2 years ago +5

    Great!

  • @gergerger53
    @gergerger53 2 years ago +6

    Woohoo, high five for team #without_the_z !

  • @bhartendu_kumar
    @bhartendu_kumar 2 years ago

    This was a very precise explanation of GAN training difficulty.

  • @sumansaha295
    @sumansaha295 2 years ago +3

    I wasn't prepared for GANs and the trauma they caused me during my master's.

  • @mrCetus
    @mrCetus 2 years ago

    The gradient regularisation was not clear. Could you please refer me to an easy-to-understand resource?

  • @draziraphale
    @draziraphale 2 years ago +3

    At 1:48 do you mix up L1 with L2?

    • @Phenix66
      @Phenix66 2 years ago +6

      Dang it. Thanks! Somehow slipped through

  • @RohitKumarSingh25
    @RohitKumarSingh25 2 years ago +2

    The gradient regularisation was not clear. Could you please refer me to an easy-to-understand resource?

    • @Phenix66
      @Phenix66 2 years ago +6

      (very crude): If you have a function f, in this case your network, and you can make sure that the gradient of the function is always 1, i.e. f'(x) = 1, then you can be sure that f(x) is linear (because its derivative is 1 everywhere), and that means there are no local optima. For a local optimum you'd need f'(x) = 0, which the regularisation prevents from happening.
      The paper in the description explains it quite well (WGAN with GP)
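
For reference, the gradient penalty from the WGAN-GP paper mentioned in the replies above can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not code from the video; names such as critic, real, fake and lambda_gp are placeholders. It penalises the critic's gradient norm for deviating from 1 on points interpolated between real and generated samples.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP style gradient penalty; all names here are illustrative."""
    # Sample random interpolation points between real and generated samples
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).detach().requires_grad_(True)

    # Critic score on the interpolated points
    scores = critic(interp)

    # Gradient of the scores with respect to the interpolated inputs
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]

    # Penalise the per-sample gradient norm for deviating from 1,
    # which pushes the critic towards being (approximately) 1-Lipschitz
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```

The penalty term is simply added to the critic's loss during training; keeping the gradient norm near 1 rules out f'(x) = 0, which is the "no local optima" point made in the reply above.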