4. Stochastic Gradient Descent

  • Published: 12 Sep 2024
  • A recurring theme in machine learning is to formulate a learning problem as an optimization problem. Empirical risk minimization was our first example of this. Thus to do learning, we need to do optimization. In this lecture we present stochastic gradient descent, which is today's standard optimization method for large-scale machine learning problems.
    Access the full course at bloom.bg/2ui2T4q
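The idea in the description can be illustrated with a minimal sketch: empirical risk minimization over a dataset, optimized by stepping along the gradient of the loss on one randomly chosen example at a time. The model (a 1-D linear fit), learning rate, and data below are illustrative assumptions, not taken from the lecture.

```python
import random

def sgd(grad, w0, data, lr=0.01, epochs=50, seed=0):
    """Stochastic gradient descent: update w using one example per step."""
    rng = random.Random(seed)
    w = w0
    for _ in range(epochs):
        rng.shuffle(data)          # visit examples in random order each pass
        for x, y in data:
            w -= lr * grad(w, x, y)  # step against the per-example gradient
    return w

def grad_sq(w, x, y):
    """Gradient of squared loss (w*x - y)**2 for a 1-D linear model."""
    return 2.0 * (w * x - y) * x

# Data generated from y = 3x; SGD should recover w close to 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w = sgd(grad_sq, 0.0, data, lr=0.05, epochs=100)
print(round(w, 2))  # → 3.0
```

Unlike full (batch) gradient descent, each update here touches only one example, which is what makes the method practical for large-scale problems.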
