Gradient Descent and Stochastic Gradient Descent

  • Published: 22 Jun 2020
  • A lecture on gradient descent and stochastic gradient descent in deep neural networks. We discuss the effects of learning rates that are too large or too small, and the convergence rates of gradient descent and stochastic gradient descent for convex functions.
    This lecture is from Northeastern University's CS 7150 Summer 2020 class on Deep Learning, taught by Paul Hand.
    The notes are available at: khoury.northeastern.edu/home/h...
    References:
    Li et al. 2018:
    Li, Hao, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. "Visualizing the loss landscape of neural nets." In Advances in Neural Information Processing Systems, pp. 6389-6399. 2018.
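    Since the description mentions learning-rate effects and convergence on convex objectives, here is a minimal NumPy sketch of those ideas. This is an editorial illustration, not Paul Hand's lecture code; the least-squares objective, step sizes, and the 1/√t schedule are all assumptions chosen for the demo.

    ```python
    import numpy as np

    # Illustrative sketch (not from the lecture): gradient descent vs. SGD
    # on a convex least-squares objective f(x) = (1/2n) ||Ax - b||^2.
    rng = np.random.default_rng(0)
    n, d = 200, 5
    A = rng.normal(size=(n, d))
    b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

    def loss(x):
        r = A @ x - b
        return 0.5 * np.mean(r ** 2)

    def full_grad(x):
        # Exact gradient: (1/n) A^T (Ax - b)
        return A.T @ (A @ x - b) / n

    def stoch_grad(x):
        # Gradient at one uniformly sampled example; unbiased for full_grad(x)
        i = rng.integers(n)
        return A[i] * (A[i] @ x - b[i])

    def gradient_descent(lr, steps=500):
        x = np.zeros(d)
        for _ in range(steps):
            x -= lr * full_grad(x)
            if np.linalg.norm(x) > 1e6:  # learning rate too large: iterates blow up
                return np.inf
        return loss(x)

    def sgd(lr0, steps=5000):
        x = np.zeros(d)
        for t in range(steps):
            # Decaying step size lr0 / sqrt(t + 1), a standard schedule for convex SGD
            x -= (lr0 / np.sqrt(t + 1)) * stoch_grad(x)
        return loss(x)

    # Too small a rate converges slowly; a moderate rate works; too large diverges.
    for lr in (1e-3, 1e-1, 5.0):
        print(f"GD  lr={lr:g}: final loss = {gradient_descent(lr):.4g}")
    print(f"SGD lr0=0.1: final loss = {sgd(0.1):.4g}")
    ```

    The decaying step size for SGD is the usual device in convex analyses: a constant rate leaves the iterates hovering in a noise ball around the minimizer, while a 1/√t decay averages the gradient noise away.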

Comments • 2

  • @fakhermokadem11 • 3 years ago

    This video is a hidden treasure of RUclips. Thank you Paul!

  • @gana1597 • 3 years ago

    Thank you!! Excellent explanation!