Understanding Deep Learning Requires Rethinking Generalization

  • Published: 17 Nov 2024

Comments • 9

  • @glitchAI • 1 month ago

    This talk is a gem.

  • @edoson01 • 5 years ago +3

    Very clear, thanks! I think there is an error in the linear models section, though: for a linear model (a system of equations) to have an infinite number of solutions, d (the number of dimensions/variables) has to be larger than n (the number of samples/equations), but the presentation states the opposite.
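
    A minimal numerical sketch of this point (the setup is illustrative, not taken from the talk): with d > n the system Xw = y is underdetermined, and adding any null-space direction of X to one solution yields another exact solution.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 3, 5                      # fewer equations (samples) than unknowns (dimensions)
        X = rng.standard_normal((n, d))
        y = rng.standard_normal(n)

        # One particular solution: the minimum-norm least-squares solution
        w0, *_ = np.linalg.lstsq(X, y, rcond=None)

        # Since rank(X) <= n < d, the null space is nontrivial; take one direction from the SVD
        _, _, Vt = np.linalg.svd(X)
        w1 = w0 + 7.0 * Vt[-1]           # shift along a null-space direction

        print(np.allclose(X @ w0, y), np.allclose(X @ w1, y))  # True True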

  • @jimmyshenmusic • 4 years ago

    Thanks for this talk. Very clear and informative.

  • @hamedgholami261 • 1 year ago

    19:05: Is the Gaussian distribution conditioned on the class labels, or are the samples drawn from one single Gaussian (regardless of class) and then assigned a label?

  • @redberries8039 • 6 years ago

    Very helpful, nice and clear. Good job!

  • @SKRithvik • 4 years ago

    Really nice presentation! At 33:49 I think it must be d >= n. Could you please explain what e_t*x_i means at 35:00? I am assuming it means the partial derivative of the loss with respect to w, evaluated at x_i.
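
    One common reading, consistent with the paper's linear-model argument (the setup below is mine, not the slide's): for squared loss on sample x_i, the gradient with respect to w factors as e_t * x_i, where e_t = x_i·w - y_i is the scalar prediction error, so each SGD update moves w along a data point and w stays in the span of the data.

        import numpy as np

        rng = np.random.default_rng(1)
        n, d = 3, 5
        X = rng.standard_normal((n, d))
        y = rng.standard_normal(n)

        w = np.zeros(d)
        for t in range(5000):
            i = rng.integers(n)
            e_t = X[i] @ w - y[i]      # scalar prediction error on sample i
            w -= 0.05 * e_t * X[i]     # grad of 0.5*(x_i.w - y_i)^2 wrt w is e_t * x_i

        # Starting from 0, w never leaves span(x_1..x_n): projecting onto the row space is a no-op
        P = X.T @ np.linalg.pinv(X @ X.T) @ X
        print(np.allclose(P @ w, w), np.allclose(X @ w, y, atol=1e-3))  # True True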

  • @szpaku1999 • 3 years ago

    That was the video I needed.

  • @rubbermanburningflowers9204 • 5 years ago

    What is the relationship between rank and an infinite number of solutions?
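
    A short answer, sketched with a hypothetical system (standard linear algebra, not specific to the talk): by rank-nullity, dim null(X) = d - rank(X), so whenever rank(X) < d there is a nonzero v with Xv = 0, and any one solution w spawns the infinite family w + c*v.

        import numpy as np

        X = np.array([[1., 0., 1.],
                      [0., 1., 1.]])    # rank 2 but d = 3 -> 1-dimensional null space
        y = np.array([1., 2.])

        w = np.array([1., 2., 0.])      # one particular solution of X @ w = y
        v = np.array([-1., -1., 1.])    # null-space vector: X @ v = 0
        for c in (0.0, 1.0, -3.5):
            print(X @ (w + c * v))      # prints [1. 2.] every time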

  • @ProfessionalTycoons • 5 years ago

    Thank you for the talk.