Lenka Zdeborová - Statistical Physics of Machine Learning (May 1, 2024)

  • Published: 30 Sep 2024

Comments • 10

  • @atabac
    @atabac 4 months ago +4

    Wow, if all teachers explained things like her, complexities would be simplified.

  • @ozachar
    @ozachar 4 months ago +3

    As a physicist, but a non-expert in AI, viewer: Very interesting insights. Over-parameterization (size) "compensates" for a sub-optimal algorithm. It is also non-trivial that it doesn't lead to getting stuck fitting the noise. Organic neural brains (human or animal) obviously don't need so much data, and are actually not that large in number of parameters (if I am not mistaken). So there is surely room for improvement in the algorithm and structure, which is exactly her direction of research. A success there would be very impactful.

    • @nias2631
      @nias2631 4 months ago

      FWIW, if you consider a brain's neurons as analogs to neurons in an ANN, then the human brain, at least, is more complex by far. Geoffrey Hinton points out that the mechanism of backprop (the chain rule) for adjusting parameters is far more efficient than biological organisms in its ability to store patterns.

    • @nias2631
      @nias2631 4 months ago

      That efficiency is what worries him, and it also points to the need for a definition of sentience arising under learning mechanisms different from our own.

  • @kevon217
    @kevon217 4 months ago +2

    Excellent talk. Love the connections and insights.

  • @theK594
    @theK594 4 months ago +1

    Fantastic lecture! Very clear and well structured! Thank you, díky🇨🇿!

  • @SSinse
    @SSinse 3 months ago

    A pleasure to listen to.

  • @shinn-tyanwu4155
    @shinn-tyanwu4155 4 months ago +1

    You will be a good mother please make many babies 😊😊😊

  • @forcebender5079
    @forcebender5079 4 months ago +5

    Understanding the black box inside machine learning will require a further step in artificial intelligence: a more advanced AI that, in turn, analyzes the black box and cracks its mechanisms. Understanding the internal mechanisms of the black box through human effort alone, as we do now, is impossible.